1 parent cb87a0f · commit a920660 — Showing 1 changed file with 1 addition and 0 deletions.
`Augmentor_ZeroCostDL4Mic.ipynb` (new file; Python 3 Colab notebook)
# **Augmentor**

Data augmentation can improve training by amplifying the differences between the images in the dataset. This is particularly useful when the available dataset is small: without augmentation, a network can quickly memorise every example in the dataset (overfitting). Augmentation is not required for training, and if your training dataset is large you should disable it.

---

*Disclaimer*:

This notebook is part of the *Zero-Cost Deep-Learning to Enhance Microscopy* project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki), jointly developed by the Jacquemet (https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories.

[Augmentor](https://github.com/mdbloice/Augmentor) was described in the following article:

Marcus D. Bloice, Peter M. Roth, Andreas Holzinger, Biomedical image augmentation using Augmentor, Bioinformatics, https://doi.org/10.1093/bioinformatics/btz259

**Please also cite this original paper when using or developing this notebook.**
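For orientation, the sketch below shows the core Augmentor workflow that section 3.2 builds on: a pipeline reads images from a source folder, each operation is attached with a probability, and `sample()` writes the requested number of augmented copies. The folder paths here are hypothetical placeholders; the full, configurable version is in section 3.2.

```python
# Minimal Augmentor pipeline sketch (placeholder paths; see section 3.2 for the full version).
import Augmentor

p = Augmentor.Pipeline("/content/gdrive/My Drive/Training_source",   # input images (placeholder)
                       "/content/gdrive/My Drive/Augmented_Folder")  # output folder (placeholder)

p.rotate90(probability=0.5)         # each operation fires independently with its probability
p.flip_left_right(probability=0.5)
p.sample(100)                       # generate 100 augmented images
```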
# **How to use this notebook?**

---

Videos describing how to use our notebooks are available on YouTube:
 - [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): full run-through of the workflow, from obtaining the notebooks and the provided test datasets to a typical use of the notebook
 - [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): detailed description of the different sections of the notebook

---
###**Structure of a notebook**

The notebook contains two types of cell:

**Text cells** provide information and can be modified by double-clicking the cell. You are currently reading a text cell. You can create a new text cell by clicking `+ Text`.

**Code cells** contain code that can be modified by selecting the cell. To execute the cell, move your cursor over the `[ ]` mark on the left side of the cell (a play button appears) and click it. Once execution has finished, the play-button animation stops. You can create a new code cell by clicking `+ Code`.

---
###**Table of contents, Code snippets** and **Files**

On the top left side of the notebook you will find three tabs which contain, from top to bottom:

*Table of contents* = the structure of the notebook. Click an entry to move quickly between sections.

*Code snippets* = examples of how to code certain tasks. You can ignore this tab when using this notebook.

*Files* = all available files. After mounting your Google Drive (see section 1) you will find your files and folders here.

**Remember that all uploaded files are purged when the runtime is changed.** All files saved in Google Drive will remain. You do not need to use the Mount Drive button; your Google Drive is connected in section 1.

**Note:** The "sample data" folder in "Files" contains default files. Do not upload anything here!

---
###**Making changes to the notebook**

**You can make a copy** of the notebook and save it to your Google Drive. To do this, click File -> Save a copy in Drive.

To **edit a cell**, double-click on it. This shows you either the source code (in code cells) or the source text (in text cells). You can use the `#` mark in code cells to comment out parts of the code, which lets you keep the original code in the cell as a comment.

# **1. Mount your Google Drive**
---

To use this notebook on the data in your Google Drive, you need to mount your Google Drive to this notebook.

Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive, click 'Allow', copy the authorization code, paste it into the cell and press Enter. This gives Colab access to the data on your drive.

Once this is done, your data are available in the **Files** tab on the top left of the notebook.

```python
#@markdown ##Run this cell to connect your Google Drive to Colab

#@markdown * Click on the URL.

#@markdown * Sign in to your Google Account.

#@markdown * Copy the authorization code.

#@markdown * Enter the authorization code.

#@markdown * Click on "Files" on the left and refresh the view. Your Google Drive folder should now be available there as "gdrive".

# Mount the user's Google Drive into the Colab runtime.
from google.colab import drive
drive.mount('/content/gdrive')
```
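If you want to confirm that the mount worked before continuing, a small check like the one below can be run in a new code cell; the folder shown under `/content/gdrive` is typically called `My Drive` (or `MyDrive` on newer runtimes), so adjust any paths accordingly.

```python
# Quick sanity check that Google Drive is mounted (sketch; folder name may vary by Colab runtime).
import os

mount_point = '/content/gdrive'
print("Mounted:", os.path.isdir(mount_point))
print("Contents:", os.listdir(mount_point))   # typically shows 'My Drive' (or 'MyDrive')
```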
# **2. Install Augmentor and Dependencies**
---

```python
#@markdown ##Install Augmentor and dependencies

# Here we install the libraries which are not already included in Colab.
!pip install Augmentor
import Augmentor
import os

# ------- Common variables to all ZeroCostDL4Mic notebooks -------
import numpy as np
from matplotlib import pyplot as plt
import urllib
import random
import shutil
import zipfile
from tifffile import imread, imsave
import time
import sys
from pathlib import Path
import pandas as pd
import csv
from glob import glob
from scipy import signal
from scipy import ndimage
from skimage import io
from sklearn.linear_model import LinearRegression
from skimage.util import img_as_uint
import matplotlib as mpl
from skimage.metrics import structural_similarity
from skimage.metrics import peak_signal_noise_ratio as psnr
from astropy.visualization import simple_norm
from skimage import img_as_float32
from skimage.util import img_as_ubyte
from tqdm import tqdm

# Colours for the warning messages
class bcolors:
    WARNING = '\033[31m'

# Silence non-critical warnings
import warnings
warnings.filterwarnings("ignore")

print("Libraries installed")
```
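Once the cell above has run, it can be worth checking that the imports work and that your data path is correct by previewing one raw training image. The sketch below assumes a placeholder folder path; point it at your own images.

```python
# Optional: preview a random raw image to confirm the imports and your data path (sketch).
import os, random
from skimage import io
from matplotlib import pyplot as plt

Training_source = "/content/gdrive/My Drive/Training_source"   # placeholder path -- use your own folder
example_file = random.choice(os.listdir(Training_source))
img = io.imread(os.path.join(Training_source, example_file))

plt.imshow(img, cmap='gray')
plt.title(example_file)
plt.axis('off')
plt.show()
```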
## **3.2. Data augmentation**
---

```python
# Data augmentation

Training_source = "" #@param {type:"string"}

Matching_Training_target = False #@param {type:"boolean"}

Training_target = "" #@param {type:"string"}

Random_Crop = True #@param {type:"boolean"}

# Note: the original cell left Crop_size without a value (a syntax error);
# 256 is an assumed default -- set it to the crop size you need.
Crop_size = 256 #@param {type:"number"}

#@markdown ####Choose a factor by which you want to multiply your original dataset
Multiply_dataset_by = 4 #@param {type:"slider", min:1, max:30, step:1}

Saving_path = "" #@param {type:"string"}

#@markdown ###Choose the probability with which each of the following image manipulations is applied (1 = always used; 0 = disabled):

#@markdown ####Mirror and rotate images
rotate_90_degrees = 0.5 #@param {type:"slider", min:0, max:1, step:0.1}
rotate_270_degrees = 0.5 #@param {type:"slider", min:0, max:1, step:0.1}
flip_left_right = 0.5 #@param {type:"slider", min:0, max:1, step:0.1}
flip_top_bottom = 0.5 #@param {type:"slider", min:0, max:1, step:0.1}

#@markdown ####Random image zoom
random_zoom = 0 #@param {type:"slider", min:0, max:1, step:0.1}
random_zoom_magnification = 0 #@param {type:"slider", min:0, max:1, step:0.1}

#@markdown ####Random image distortion
random_distortion = 0 #@param {type:"slider", min:0, max:1, step:0.1}

#@markdown ####Image shearing and skewing
image_shear = 0 #@param {type:"slider", min:0, max:1, step:0.1}
max_image_shear = 1 #@param {type:"slider", min:1, max:25, step:1}
skew_image = 0 #@param {type:"slider", min:0, max:1, step:0.1}
skew_image_magnitude = 0 #@param {type:"slider", min:0, max:1, step:0.1}


list_files = os.listdir(Training_source)
Nb_files = len(list_files)
Nb_augmented_files = Nb_files * Multiply_dataset_by

# Recreate the output folders from scratch.
Augmented_folder = Saving_path + "/Augmented_Folder"
if os.path.exists(Augmented_folder):
    shutil.rmtree(Augmented_folder)
os.makedirs(Augmented_folder)

Training_source_augmented = Saving_path + "/Training_source_augmented"
if os.path.exists(Training_source_augmented):
    shutil.rmtree(Training_source_augmented)
os.makedirs(Training_source_augmented)

if Matching_Training_target:
    Training_target_augmented = Saving_path + "/Training_target_augmented"
    if os.path.exists(Training_target_augmented):
        shutil.rmtree(Training_target_augmented)
    os.makedirs(Training_target_augmented)

# Here we generate the augmented images.
# Load the images into an Augmentor pipeline.
p = Augmentor.Pipeline(Training_source, Augmented_folder)

# Register the matching ground-truth images.
if Matching_Training_target:
    p.ground_truth(Training_target)

# Define the augmentation operations.
if Random_Crop:
    p.crop_by_size(probability=1, width=Crop_size, height=Crop_size, centre=False)

if rotate_90_degrees != 0:
    p.rotate90(probability=rotate_90_degrees)

if rotate_270_degrees != 0:
    p.rotate270(probability=rotate_270_degrees)

if flip_left_right != 0:
    p.flip_left_right(probability=flip_left_right)

if flip_top_bottom != 0:
    p.flip_top_bottom(probability=flip_top_bottom)

if random_zoom != 0:
    p.zoom_random(probability=random_zoom, percentage_area=random_zoom_magnification)

if random_distortion != 0:
    p.random_distortion(probability=random_distortion, grid_width=4, grid_height=4, magnitude=8)

if image_shear != 0:
    # Use the max_image_shear slider (the original code ignored it and hard-coded 20).
    p.shear(probability=image_shear, max_shear_left=max_image_shear, max_shear_right=max_image_shear)

if skew_image != 0:
    p.skew(probability=skew_image, magnitude=skew_image_magnitude)

p.sample(int(Nb_augmented_files))
print(int(Nb_augmented_files), "images generated")

# Sort the generated images into the augmented training source and target folders.
augmented_files = os.listdir(Augmented_folder)

for f in augmented_files:
    if f.startswith("_groundtruth_(1)_"):
        shortname_noprefix = f[len("_groundtruth_(1)_"):]
        shutil.copyfile(Augmented_folder + "/" + f, Training_target_augmented + "/" + shortname_noprefix)
    else:
        shutil.copyfile(Augmented_folder + "/" + f, Training_source_augmented + "/" + f)

# Remove the '_original' tag that Augmentor adds to the source file names.
for filename in os.listdir(Training_source_augmented):
    os.rename(Training_source_augmented + "/" + filename,
              Training_source_augmented + "/" + filename.replace('_original', ''))

# Clean up the intermediate folder.
shutil.rmtree(Augmented_folder)
```
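After this cell has finished, it can help to confirm how many augmented images were written before moving on to training. The short sketch below reuses the folders created by the cell above.

```python
# Optional check: report how many augmented images were written (sketch).
import os

src_files = sorted(os.listdir(Training_source_augmented))
print(len(src_files), "augmented source images, e.g.", src_files[:3])

if Matching_Training_target:
    tgt_files = sorted(os.listdir(Training_target_augmented))
    print(len(tgt_files), "augmented target images, e.g.", tgt_files[:3])
```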
## **6.3. Download your images**
---

**Store your data** and ALL of its results elsewhere by downloading them from Google Drive, and then clean the original folder tree (datasets, results, trained models, etc.) if you plan to train or use new networks. Please note that the notebook will otherwise **OVERWRITE** all files which have the same name. A sketch of one way to package results for download is given at the end of this notebook.

# **Thank you for using Augmentor!**
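As a complement to section 6.3 above, the sketch below shows one way to zip a results folder and trigger a browser download from Colab. The folder path is a placeholder; `shutil.make_archive` creates the zip and `files.download` downloads it.

```python
# Optional: zip a results folder and download it from Colab (sketch with a placeholder path).
import shutil
from google.colab import files

results_folder = "/content/gdrive/My Drive/Augmentor_results"   # placeholder -- point at your own folder
archive_path = shutil.make_archive("/content/Augmentor_results", "zip", results_folder)
files.download(archive_path)
```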