diff --git a/CVPR-2019-TL;DR.md b/CVPR-2019-TL;DR.md
new file mode 100644
index 0000000..90110e2
--- /dev/null
+++ b/CVPR-2019-TL;DR.md
@@ -0,0 +1,85 @@
+
+
+A subset of [CVPR 2019](http://cvpr2019.thecvf.com) papers worth having a look at. Didn't have time to read them or the tweets about them?
+Here you'll find a TL;DR version of a subset (~30) of the [papers](http://openaccess.thecvf.com/CVPR2019.py), assuming you know the state of the art for the problem each paper investigates.
+
+`DONE : 3/32 `
+
+### [Unsupervised Deep Tracking](http://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Unsupervised_Deep_Tracking_CVPR_2019_paper.pdf)
+
+Visual tracking as similarity search. Extract features from a query **q-bbox** (of a car, for example) and [compute a distance metric](http://www.robots.ox.ac.uk/~luca/siamese-fc.html) over all spatial locations in a target image. Penalise all locations where **q-bbox** and target **t-bbox** don't match. For this you would need annotations of the object being tracked: you could use a ranking objective, or produce regression targets and compute an IoU loss. But why use annotations when you can play the video forwards and backwards? A good tracker should handle both directions. Given a query **q-bbox**, find the best **t-bbox** in another video frame. Now use the **t-bbox** as the query; a good tracker should be able to recover the original **q-bbox**. If it doesn't, penalise it with an IoU loss.
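+
+A minimal numpy sketch of that forward-backward check, assuming a hypothetical `track(frame_a, bbox, frame_b)` function standing in for whatever tracker you use (the paper builds this on a Siamese correlation-filter tracker; this only illustrates the consistency idea):
+
+```
+import numpy as np
+
+def iou(box_a, box_b):
+  # Boxes are [x1, y1, x2, y2]; returns intersection-over-union.
+  x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
+  x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
+  inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
+  area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
+  area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
+  union = area_a + area_b - inter
+  return inter / union if union > 0 else 0.0
+
+def cycle_consistency_loss(track, frame_t, frame_t1, q_bbox):
+  # Forward: locate q_bbox from frame t in frame t+1.
+  t_bbox = track(frame_t, q_bbox, frame_t1)
+  # Backward: use t_bbox as the query and try to recover q_bbox in frame t.
+  recovered_bbox = track(frame_t1, t_bbox, frame_t)
+  # A perfect tracker closes the loop; penalise the mismatch.
+  return 1.0 - iou(q_bbox, recovered_bbox)
+```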
+
+### [Do Better ImageNet Models Transfer Better?](http://openaccess.thecvf.com/content_CVPR_2019/papers/Kornblith_Do_Better_ImageNet_Models_Transfer_Better_CVPR_2019_paper.pdf)
+
+Yes. Models which do better on a base task (Task-1) transfer nicely to another task (Task-2). Say you train MobileNetV2 on ILSVRC2012 (Task-1) and fine-tune on MS COCO for semantic segmentation (Task-2): segmentation would do better if you used a stronger base model, e.g. ResNet101, which has higher accuracy on ILSVRC2012. Where does this break down?
+- When Task-2 is not a natural extension of Task-1, e.g. image aesthetics
+- When the dataset for Task-2 is as large as the one for Task-1
+- When regularisation applied for Task-1 (weight decay, weight norm) harms Task-2
+Why does this happen? No clear answer: the authors speculate that large, over-parameterised networks are better at finding a plateau which is also suitable for Task-2.
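+
+For reference, a minimal sketch of the transfer recipe being studied (ImageNet-pretrained backbone, new head, fine-tune everything), written with torchvision; the toy 10-class head and the hyper-parameters are placeholders, not the paper's setup:
+
+```
+import torch
+import torch.nn as nn
+import torchvision.models as models
+
+# Task-1 backbone: ImageNet-pretrained ResNet-101 from torchvision.
+backbone = models.resnet101(pretrained=True)
+
+# Task-2 head: swap the ImageNet classifier for the downstream task
+# (a toy 10-class classifier stands in for the real Task-2 head).
+backbone.fc = nn.Linear(backbone.fc.in_features, 10)
+
+# Fine-tune everything; note the caveat above that Task-1 regularisation
+# settings (weight decay etc.) can hurt Task-2.
+optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3,
+                            momentum=0.9, weight_decay=0.0)
+criterion = nn.CrossEntropyLoss()
+
+def finetune_step(images, labels):
+  optimizer.zero_grad()
+  loss = criterion(backbone(images), labels)
+  loss.backward()
+  optimizer.step()
+  return loss.item()
+```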
+
+### [Learning to Sample](http://openaccess.thecvf.com/content_CVPR_2019/papers/Dovrat_Learning_to_Sample_CVPR_2019_paper.pdf)
+Compressing 3D point clouds while preserving downstream task accuracy, using a model which processes 3D point clouds. First convert each 3D point to a feature vector using [1x1 convolutions](https://web.stanford.edu/~rqi/pointnet/docs/cvpr17_pointnet_slides.pdf), run global max-pooling feature-wise, then add a **few** dense layers and generate the sampled 3D point cloud. In a nutshell: Nx3 input in, Kx3 PointNet-style output out. The generated point cloud is not guaranteed to be a subset of the input, so do nearest-neighbour matching with L2 distance. The matching is only applied at inference time, as the final step. During training, the generated points are processed by the task network as-is, since the matching is not differentiable and cannot propagate the task loss. What if you want the sample size K to be part of the network? Train a network whose input & output are both Nx3, except that the output points are ordered according to their importance in minimising the downstream task loss. YOLO
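+
+A minimal numpy sketch of that inference-time matching step (snap each generated point to its L2-nearest input point so the output is a true subset of the input); the point counts below are made up for illustration:
+
+```
+import numpy as np
+
+def match_to_input(generated, cloud):
+  # generated: (K, 3) sampler output; cloud: (N, 3) input point cloud.
+  d = np.linalg.norm(generated[:, None, :] - cloud[None, :, :], axis=-1)
+  nearest = np.argmin(d, axis=1)  # (K,) indices into the input cloud
+  return cloud[nearest]
+
+cloud = np.random.rand(1024, 3)      # toy N x 3 input
+generated = np.random.rand(64, 3)    # toy K x 3 sampler output
+sampled = match_to_input(generated, cloud)
+assert sampled.shape == (64, 3)
+```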
+
+### [Fully Quantized Network for Object Detection](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Fully_Quantized_Network_for_Object_Detection_CVPR_2019_paper.pdf)
+
+### [Learning Metrics from Teachers: Compact Networks for Image Embedding](http://openaccess.thecvf.com/content_CVPR_2019/papers/Yu_Learning_Metrics_From_Teachers_Compact_Networks_for_Image_Embedding_CVPR_2019_paper.pdf)
+
+### [What Do Single-view 3D Reconstruction Networks Learn?](http://openaccess.thecvf.com/content_CVPR_2019/papers/Tatarchenko_What_Do_Single-View_3D_Reconstruction_Networks_Learn_CVPR_2019_paper.pdf)
+
+### [Zoom to Learn, Learn to Zoom](http://openaccess.thecvf.com/content_CVPR_2019/papers/Zhang_Zoom_to_Learn_Learn_to_Zoom_CVPR_2019_paper.pdf)
+
+### [Unsupervised Image Captioning](http://openaccess.thecvf.com/content_CVPR_2019/papers/Feng_Unsupervised_Image_Captioning_CVPR_2019_paper.pdf)
+
+### [Ranked List Loss for Deep Metric Learning](http://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Ranked_List_Loss_for_Deep_Metric_Learning_CVPR_2019_paper.pdf)
+
+
+**Naive triplet**: sample `[+, anchor, -]` == a **triplet**. Pull `+` and `anchor` inside a sphere of diameter alpha - margin. Push negatives outside a sphere of radius alpha. This fails when all your triplets become easy to separate: the loss is almost zero and the averaged gradient over a mini-batch no longer moves the model parameters. Another idea **(N-pair-mc)**: build a smarter mini-batch. Take pairs from N different classes and build triplets on the fly, using positives from other classes as negatives. This can be further improved by, instead of pushing all negatives away, pushing away a single point which represents them all, heuristically computed as the point closest to the `+` sample **(proxy-NCA)**. Use the anchor as positive and the positive as anchor and you get **Lifted Struct**. **Main idea**: in a batch you have multiple samples of each class. Given a `+` query, find all violations among samples of the same class and among the negative set. Compute the standard triplet loss for all positive violations and a weighted loss for all negative violations, weighted by the margin of violation (to mine hard negatives). Voilà, you're done!
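+
+A rough numpy sketch of that main idea for a single query, assuming you already have the query-to-batch distances in embedding space; the boundaries and the exponential weighting are illustrative, not the paper's exact hyper-parameters:
+
+```
+import numpy as np
+
+def ranked_list_loss(dists, labels, query_idx, alpha=1.2, margin=0.4,
+                     temperature=10.0):
+  # dists: (B,) distances from the query embedding to every sample in
+  # the batch; labels: (B,) class ids.
+  not_query = np.arange(len(labels)) != query_idx
+  pos = dists[(labels == labels[query_idx]) & not_query]
+  neg = dists[labels != labels[query_idx]]
+  # Positives violating the inner boundary (alpha - margin).
+  pos_loss = np.maximum(0.0, pos - (alpha - margin)).sum()
+  # Negatives violating the outer boundary alpha, weighted by how badly
+  # they violate it (hard negatives get larger weights).
+  neg_viol = np.maximum(0.0, alpha - neg)
+  weights = np.exp(temperature * neg_viol) * (neg_viol > 0)
+  if weights.sum() > 0:
+    neg_loss = (weights * neg_viol).sum() / weights.sum()
+  else:
+    neg_loss = 0.0
+  return pos_loss + neg_loss
+
+embeddings = np.random.rand(8, 16)
+labels = np.array([0, 0, 0, 1, 1, 2, 2, 2])
+dists = np.linalg.norm(embeddings - embeddings[0], axis=1)
+print(ranked_list_loss(dists, labels, query_idx=0))
+```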
+
+
+### [Recurrent Neural Network for (Un-)supervised Learning of Monocular Video Visual Odometry and Depth](http://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Recurrent_Neural_Network_for_Un-Supervised_Learning_of_Monocular_Video_Visual_CVPR_2019_paper.pdf)
+
+### [Arbitrary Style Transfer with Style-Attentional Networks](http://openaccess.thecvf.com/content_CVPR_2019/papers/Park_Arbitrary_Style_Transfer_With_Style-Attentional_Networks_CVPR_2019_paper.pdf)
+
+### [Art2Real: Unfolding the Reality of Artworks via Semantically-Aware Image-to-Image Translation](http://openaccess.thecvf.com/content_CVPR_2019/papers/Tomei_Art2Real_Unfolding_the_Reality_of_Artworks_via_Semantically-Aware_Image-To-Image_Translation_CVPR_2019_paper.pdf)
+
+### [Knowledge-Embedded Routing Network for Scene Graph Generation](http://openaccess.thecvf.com/content_CVPR_2019/papers/Chen_Knowledge-Embedded_Routing_Network_for_Scene_Graph_Generation_CVPR_2019_paper.pdf)
+
+### [Interpreting CNNs via Decision Trees](http://openaccess.thecvf.com/content_CVPR_2019/papers/Zhang_Interpreting_CNNs_via_Decision_Trees_CVPR_2019_paper.pdf)
+
+### [L3-Net: Towards Learning based LiDAR Localization for Autonomous Driving](http://openaccess.thecvf.com/content_CVPR_2019/papers/Lu_L3-Net_Towards_Learning_Based_LiDAR_Localization_for_Autonomous_Driving_CVPR_2019_paper.pdf)
+
+### [Locating Objects Without Bounding Boxes](http://openaccess.thecvf.com/content_CVPR_2019/papers/Ribera_Locating_Objects_Without_Bounding_Boxes_CVPR_2019_paper.pdf)
+
+### [Learning from Synthetic Data for Crowd Counting in the Wild](http://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Learning_From_Synthetic_Data_for_Crowd_Counting_in_the_Wild_CVPR_2019_paper.pdf)
+
+### [Unsupervised Image Matching and Object Discovery as Optimization](http://openaccess.thecvf.com/content_CVPR_2019/papers/Vo_Unsupervised_Image_Matching_and_Object_Discovery_as_Optimization_CVPR_2019_paper.pdf)
+
+### [Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving](http://openaccess.thecvf.com/content_CVPR_2019/papers/Vo_Unsupervised_Image_Matching_and_Object_Discovery_as_Optimization_CVPR_2019_paper.pdf)
+
+### [PointConv: Deep Convolutional Networks on 3D Point Clouds](http://openaccess.thecvf.com/content_CVPR_2019/papers/Wu_PointConv_Deep_Convolutional_Networks_on_3D_Point_Clouds_CVPR_2019_paper.pdf)
+
+### [Inserting Videos into Videos](http://openaccess.thecvf.com/content_CVPR_2019/papers/Lee_Inserting_Videos_Into_Videos_CVPR_2019_paper.pdf)
+
+### [Self-Supervised Representation Learning by Rotation Feature Decoupling](http://openaccess.thecvf.com/content_CVPR_2019/papers/Feng_Self-Supervised_Representation_Learning_by_Rotation_Feature_Decoupling_CVPR_2019_paper.pdf)
+
+### [Grounding Human-to-Vehicle Advice for Self-driving Vehicles](http://openaccess.thecvf.com/content_CVPR_2019/papers/Kim_Grounding_Human-To-Vehicle_Advice_for_Self-Driving_Vehicles_CVPR_2019_paper.pdf)
+
+### [Practical Full Resolution Learned Lossless Image Compression](http://openaccess.thecvf.com/content_CVPR_2019/papers/Mentzer_Practical_Full_Resolution_Learned_Lossless_Image_Compression_CVPR_2019_paper.pdf)
+
+### [SoDeep: a Sorting Deep net to learn ranking loss surrogates](http://openaccess.thecvf.com/content_CVPR_2019/papers/Engilberge_SoDeep_A_Sorting_Deep_Net_to_Learn_Ranking_Loss_Surrogates_CVPR_2019_paper.pdf)
+
+### [Where’s Wally Now? Deep Generative and Discriminative Embeddings for Novelty Detection](http://openaccess.thecvf.com/content_CVPR_2019/papers/Burlina_Wheres_Wally_Now_Deep_Generative_and_Discriminative_Embeddings_for_Novelty_CVPR_2019_paper.pdf)
+
+### [Deep Single Image Camera Calibration with Radial Distortion](http://openaccess.thecvf.com/content_CVPR_2019/papers/Lopez_Deep_Single_Image_Camera_Calibration_With_Radial_Distortion_CVPR_2019_paper.pdf)
+
+### [Self-Supervised GANs via Auxiliary Rotation Loss](http://openaccess.thecvf.com/content_CVPR_2019/papers/Chen_Self-Supervised_GANs_via_Auxiliary_Rotation_Loss_CVPR_2019_paper.pdf)
+
+### [Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation](http://openaccess.thecvf.com/content_CVPR_2019/papers/Ranjan_Competitive_Collaboration_Joint_Unsupervised_Learning_of_Depth_Camera_Motion_Optical_CVPR_2019_paper.pdf)
+
+### [Variational Autoencoders Pursue PCA Directions (by Accident)](http://openaccess.thecvf.com/content_CVPR_2019/papers/Rolinek_Variational_Autoencoders_Pursue_PCA_Directions_by_Accident_CVPR_2019_paper.pdf)
+
+### [PointPillars: Fast Encoders for Object Detection from Point Clouds](http://openaccess.thecvf.com/content_CVPR_2019/papers/Lang_PointPillars_Fast_Encoders_for_Object_Detection_From_Point_Clouds_CVPR_2019_paper.pdf)
+
+### [Leveraging Heterogeneous Auxiliary Tasks to Assist Crowd Counting](http://openaccess.thecvf.com/content_CVPR_2019/papers/Zhao_Leveraging_Heterogeneous_Auxiliary_Tasks_to_Assist_Crowd_Counting_CVPR_2019_paper.pdf)
+
+### [Test of Time Award : Online Dictionary Learning for Sparse Coding](http://thoth.inrialpes.fr/people/mairal/resources/pdf/test_of_time.pdf)
\ No newline at end of file
diff --git a/Machine-Learning-jokes.md b/Machine-Learning-jokes.md
new file mode 100644
index 0000000..66893dd
--- /dev/null
+++ b/Machine-Learning-jokes.md
@@ -0,0 +1,16 @@
+Quoting the researcher D. Ross in her [prior art](https://www.youtube.com/watch?v=4GtyMeEcPPE) explaining how self-supervised video representation learning from rotation works:
+
+```bash
+Upside down
+Boy, you turn me
+Inside out
+And round and round
+```
+
+
+
+`Is this real life, is this just fantasy` - Self supervised learning in virtual environment(s), Bahnhof, Gorli - Symposium of Localisation Agnostic Canonical Knowledge, 2023. Xberg
+
+
+
+
diff --git a/Onboarding-Perception.md b/Onboarding-Perception.md
new file mode 100644
index 0000000..9d46667
--- /dev/null
+++ b/Onboarding-Perception.md
@@ -0,0 +1,118 @@
+# Onboarding - Perception
+
+Hey! Welcome to our team :) So glad you joined us on our adventure to build technology powering the [future of mobility](https://66.media.tumblr.com/tumblr_mdv51hvvCe1ry1tp9.gif). Here's all the information you'll need to get started. In case you are blocked, feel free to ping your [buddy](https://i.chzbgr.com/maxW500/7333828608/hE3E4FEA4/) or your teammates - [say hi](https://66.media.tumblr.com/tumblr_mcg6eeCije1ry8teo.gif)! But first, add your [photo](https://i.imgur.com/jZhsTc7.gif) to the table below :fireworks:
+
+## Your Team
+
+Mo | Dani | Harsi | Emna | Francesco | Nicolai |
+:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------|:-----------------------------:|:-----------------------------
+  |  |  |  |  |  |
+
+## Your Workstation
+You have your very own ML workstation! Ask your buddy to provide you with your credentials (which you should change ASAP). Only you have access to your workstation. Each workstation is fitted with at least one Titan V which is yours for [running inference/training locally](https://66.media.tumblr.com/tumblr_mdv51hvvCe1ry1tp9.gif). For training on larger datasets we have AWS credentials for you (more on this below), [ping your buddy](https://66.media.tumblr.com/tumblr_m5ezckIAL81r2hf4ro1_500.gif) for more information. Workstations come pre-installed with 1. `Ubuntu 18.04` 2. `Cuda 10.1` 3. Nvidia drivers `v418.67`. You'll have to install standard libraries and python packages before you can start running cool stuff. Follow the instructions [here](https://gitlab.mobilityservices.io/am/roam/perception/team-space/cv-ml-resources/wikis/Workstation-installation) to get started.
+
+## Our perception@gitlab
+All perception related repositories are [here](https://gitlab.mobilityservices.io/am/roam/perception). If you can read this you already have access to our repositories :) A good place to start is our data [catalogue](https://gitlab.mobilityservices.io/am/roam/perception/data-catalogue). If you need access to our drive data you can use the scripts within the [catalogue](https://gitlab.mobilityservices.io/am/roam/perception/data-catalogue). As our data sources keep growing, we'd like to make sure everyone has access to all the datasets they need :dancer: . If you plan to use a [public dataset](https://gitlab.mobilityservices.io/am/roam/perception/cv-ml-resources#datasets) please discuss its LICENSE with your team members.
+
+## Your AWS account
+
+It's easy to work with your AWS account once `aws cli` is set up on both your workstation and your notebook.
+For your Linux workstation skip the step below. For Mac you'd need to [jump through a few hoops](https://66.media.tumblr.com/tumblr_mcythlotza1ry8teo.gif) before setting up awscli.
+
+Install `developer tools` + `Homebrew` + `python-pip` on MacBook. Skip this step for your workstation.
+```
+/usr/bin/xcode-select --install
+/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
+brew install python
+```
+Install [AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/index.html)
+```
+# Both Unix and Mac
+pip install awscli
+```
+
+Follow the instructions in the [infrastructure wiki](https://gitlab.mobilityservices.io/infra/infra-docs/wikis/FAQ#aws) to login to AWS with KeyCloak.
+
+Now test your AWS set-up
+```
+# Both Unix and Mac
+aws s3 ls s3://das-perception/perception_onboarding/ --profile DASPerceptionDev
+
+2019-04-12 16:18:49 16756 dani.jpeg
+2019-04-12 16:16:36 278860 harsi.jpg
+2019-04-12 16:18:39 135149 mo.png
+2019-04-12 16:18:24 6 team.txt
+```
+
+## Our S3 Buckets
+
+Location | Usage | Example
+:-------------------------|:-------------------------|:-------------------------
+s3://das-perception/perception_data | Drive Data/Open source Data | Videos, GPS, Lidar
+s3://das-perception/perception_experiments | Training Data/Docker Images | tf-records etc
+s3://das-perception/perception_logs | Logging your experiments | tensorflow summary events
+s3://das-perception/perception_models | Pre-trained models/your models | ResNet101 pre-trained on ImageNet
+
+Now you can push data to S3 and start [training your models](http://s.pikabu.ru/post_img/2013/04/07/7/1365327582_998102211.gif)! Need an 8-GPU machine? [Here's](https://gitlab.mobilityservices.io/am/roam/perception/cv-ml-resources/wikis/cloud-compute) how to get one!
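+
+If you'd rather do this from python than the `aws` CLI, here's a minimal `boto3` sketch (assuming `boto3` is installed and the `DASPerceptionDev` profile from above is configured); the local file name and experiment prefix are just examples:
+
+```
+import boto3
+
+# Reuse the same named profile as the aws-cli example above.
+session = boto3.Session(profile_name="DASPerceptionDev")
+s3 = session.client("s3")
+
+bucket = "das-perception"
+key = "perception_experiments/my_experiment/train-0000.tfrecord"
+
+# Upload a local tf-record to the experiments bucket (example key).
+s3.upload_file("train-0000.tfrecord", bucket, key)
+
+# Sanity check: list what landed under the experiment prefix.
+resp = s3.list_objects_v2(Bucket=bucket,
+                          Prefix="perception_experiments/my_experiment/")
+for obj in resp.get("Contents", []):
+  print(obj["Key"], obj["Size"])
+```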
+
+## Our Dockers ourselves
+Our gitlab repositories each have their own dedicated docker registry. For example, on the left side panel (after expanding) you'll see `Registry`. You can [build and push docker](https://gitlab.mobilityservices.io/am/roam/perception/cv-ml-resources/container_registry) images (tagged, obviously) to the registry.
+
+To access the registry follow the steps below; you can also find [instructions](https://gitlab.mobilityservices.io/am/roam/perception/cv-ml-resources/container_registry) on accessing the gitlab docker registry from your workstation or an AWS machine. You'll need to set up a [personal access token](https://gitlab.mobilityservices.io/help/user/profile/account/two_factor_authentication#personal-access-tokens) and run the following the first time
+
+Clone this repo
+```
+git clone ssh://git@ssh.gitlab.mobilityservices.io:443/am/roam/perception/cv-ml-resources.git
+git checkout docker-demo
+```
+Get your [personal access token](https://gitlab.mobilityservices.io/help/user/profile/account/two_factor_authentication#personal-access-tokens) and save it in a secure location. You can use this token from your workstation or MacBook. When prompted for a password below, provide your personal access token.
+```
+docker login registry.mobilityservices.io -u <your-gitlab-username>
+```
+
+Build your docker image and push it to our registry as below. You can tag your docker image `latest`, but best practice is to use a tag derived from the commit history as the `commit_tag`.
+```
+# We use the following conventions for our docker containers.
+# docker build -t registry.mobilityservices.io/am/roam/perception/<repo-name>/<image-name>:<tag> .
+docker build -t registry.mobilityservices.io/am/roam/perception/cv-ml-resources/docker-demo:<tag> .
+docker push registry.mobilityservices.io/am/roam/perception/cv-ml-resources/docker-demo:<tag>
+```
+You can see the pushed docker images in our registry [here](https://gitlab.mobilityservices.io/am/roam/perception/cv-ml-resources/container_registry).
+
+## Our [Coding Conventions](http://i.imgur.com/rnU38.gif)
+Within the python environment, we ([try to](https://i.imgur.com/W4oNHsD.gif)) follow the [PEP8](https://www.python.org/dev/peps/pep-0008/) coding conventions [wherever possible](https://66.media.tumblr.com/tumblr_mdcdp6BhhW1ry1tp9.gif). The easiest way to enforce this in your favourite text editor is to install a PEP8 [package/plugin](https://pypi.org/project/pycodestyle/). Inconsistent coding practices between team members lead to messy PRs (tabs v spaces etc.) and painful [code reviews](https://66.media.tumblr.com/tumblr_meqd87LELl1ry1tp9.gif). Here are the detailed [`guidelines`](https://gitlab.mobilityservices.io/am/roam/perception/team-space/cv-ml-resources/wikis/Python-coding-conventions)
+
+## Our NAS storage
+We have our own Network Attached Storage (NAS). This is the one-stop location for accessing all the drive data, annotations, results, experiments etc. You have an account to mount the drive onto your workstation and/or MacBook. You'll need to connect to our dedicated WiFi: `DAS_NW`. Pwd : `N85qaDL9uXeKm5RG26m7TZNdErSLSb2ZB73T`
+
+For MacBook:
+Connect to our NAS with an ethernet cable.
+```
+Finder -> Network -> perception_nas -> connect (top left) -> Enter credentials and wait a bit
+```
+
+From Linux (CLI only): follow these [steps](https://gitlab.mobilityservices.io/am/roam/perception/team-space/cv-ml-resources/wikis/Workstation-installation#mounting-nas)
+
+You can also mount the NFS share from the command line; if you do this you are on your own.
+
+
+## Our Roadmap
+You can find our mission statement and roadmap [here](https://gitlab.mobilityservices.io/am/roam/perception/team-space/perception/blob/team/README.md)
+## Your first day
+* [ ] Team Photo : Add your photo above
+* [ ] Gitlab : Get access ping @peter
+* [ ] Meet the team : #das-perception slack channel
+* [ ] Workstation Setup : Login and setup
+* [ ] AWS Setup : Credentials + your EBS storage
+
+## Your first week
+* [ ] Intro to Drive Data catalogue
+* [ ] Aloha @ OKRs : Align progress
+* [ ] Hey - Your first task :)
+* [ ] Meet and Greet - Mapping
+* [ ] Say Hi to Routing
+* [ ] Salut Simulation
+
+## Your first month
+* [ ] Solve Computer Vision :dancer\_tone2:
+* [ ] Help us improve the on-boarding wiki :)
\ No newline at end of file
diff --git a/Presentations.md b/Presentations.md
new file mode 100644
index 0000000..1953fd3
--- /dev/null
+++ b/Presentations.md
@@ -0,0 +1,3 @@
+Place for all perception internal & external presentations
+
+[2019-07-23_Product_Review_Perception.pptx](uploads/cecd5af9a389104a810086220b45c99c/2019-07-23_Product_Review_Perception.pptx)
\ No newline at end of file
diff --git a/Python-coding-conventions.md b/Python-coding-conventions.md
new file mode 100644
index 0000000..562d744
--- /dev/null
+++ b/Python-coding-conventions.md
@@ -0,0 +1,148 @@
+# Introduction
+
+Hi there! Glad you could make time for this topic. We know this is an overhead we could avoid discussing, but we have found it saves us many hours on merge requests discussing formatting & ensures code quality. Within the python :snake: environment, we ([follow](https://i.imgur.com/W4oNHsD.gif)) the [PEP8](https://www.python.org/dev/peps/pep-0008/) coding conventions. The easiest way to enforce this in your favourite text editor is to install a PEP8 [package/plugin](https://pypi.org/project/pycodestyle/).
+
+# Table of Contents
+1. [Tabs v Spaces](#tabs-v-spaces)
+2. [Maximum line length](#maximum-line-length)
+3. [Imports](#imports)
+4. [Hanging indent](#hanging-indent)
+5. [Blank lines](#blank-lines)
+6. [Whitespaces in expressions](https://www.python.org/dev/peps/pep-0008/#whitespace-in-expressions-and-statements)
+7. [Function & variable names](https://www.python.org/dev/peps/pep-0008/#function-and-variable-names)
+8. [Constants](https://www.python.org/dev/peps/pep-0008/#constants)
+9. [Class Names](https://www.python.org/dev/peps/pep-0008/#id41)
+10. [Avoid names with](https://www.python.org/dev/peps/pep-0008/#names-to-avoid)
+11. [Naming style](https://www.python.org/dev/peps/pep-0008/#descriptive-naming-styles)
+12. [In-line comments](#in-line-comments)
+
+## Tabs v Spaces
+**Tabs are 2 spaces**. After much deliberation we decided on something everyone disagrees on. For compatibility please change your editor settings to follow this convention.
+
+`Good`
+```
+def enter_club(club='berghain'):
+
+ # try
+ return denied
+
+```
+`No Good`
+```
+def enter_club(club='berghain'):
+
+ # try
+ return denied
+
+```
+## Maximum line length
+**Maximum line length is set to 80**. This allows web editors/local editors to compare files side-by-side when reviewing an MR/PR. It is also recommended in [PEP8](https://www.python.org/dev/peps/pep-0008/#maximum-line-length).
+
+`Good`
+```
+def never_gonna(give_you='up', let_you='down', run='around', desert='you',
+ make_you='cry', say='goodbye'):
+ return troll
+
+```
+`No Good`
+```
+def never_gonna(give_you='up', let_you='down', run='around', desert='you', make_you='cry', say='goodbye'):
+ return troll
+
+```
+
+## Imports
+**All imports are [absolute](https://www.python.org/dev/peps/pep-0008/#imports)**. Organise imports in the following order at the top of your python file.
+1. Standard library imports.
+2. Related third party imports.
+3. Local application/library specific imports.
+
+`Good`
+```
+import os
+import sys
+import collections
+
+import tqdm
+
+from planet import Earth as home
+from planet.Earth import dogs as friends
+```
+
+`NO Good`
+```
+import os, sys, collections
+
+import tqdm
+from planet import Earth as home
+from planet.Earth import dogs as friends
+```
+## Hanging indent
+It's clean and recommended: align wrapped arguments with the opening delimiter, visually separating the function name from the input variables. This holds for both function definitions and function calls.
+
+`Good`
+```
+wimbaway = in_the_jungle(the_quite, jungle,
+ the_lion, sleeps_tonight)
+```
+`NO Good`
+```
+wimbaway = in_the_jungle(the_quite, jungle,
+    the_lion, sleeps_tonight)
+```
+
+## Blank lines
+Surround top-level function and class definitions with two blank lines. Method definitions inside a class are surrounded by a single blank line. Use blank lines in functions, sparingly, to indicate logical sections.
+
+```
+def bless_the_rains(down_in_africa=True):
+
+  # Black as a pit from pole to pole
+  pass
+
+
+def wake_me_up(when_september_ends=True):
+
+  # of August
+  pass
+```
+
+```
+class Humans(object):
+
+  def __init__(self, no_super_powers=True):
+
+    self.no_super_powers = no_super_powers
+
+
+class XMen(object):
+
+  def __init__(self, prof_xavier=None, magneto=None):
+
+    self.xavier = prof_xavier
+    self.magneto = magneto
+
+  def _is_wolverine_alive(self):
+
+    return True
+
+  def _are_mutants_banned(self):
+
+    return True
+
+```
+## In-line comments
+
+`Good`
+```
+# This is a hack
+w -= (gradient_f(x, w) * learning_rate)
+```
+`No good`
+```
+w -= (gradient_f(x, w) * learning_rate) # This is a hack
+```
+Unless it helps (still limiting to 80 chars max length)
+```
+w -= (gradient_f(x, w) * learning_rate) # comp grads and update weights
+```
\ No newline at end of file
diff --git a/Workstation-installation.md b/Workstation-installation.md
new file mode 100644
index 0000000..da4622e
--- /dev/null
+++ b/Workstation-installation.md
@@ -0,0 +1,220 @@
+# Installation
+You can skip to the next step if you already have your account set up. Attach power to your workstation & hook it up with an ethernet cable. Boot your machine for the first time and you'll be asked to make an account. Select your username/password & machine name. You can use the machine name later to ssh into your workstation from your MacBook or Linux machine. Log out from the OEM temporary account and log in with your username/password. Check if your network is up
+```
+ping 8.8.8.8
+```
+Update & upgrade your package manager
+```
+sudo apt update
+sudo apt-get -y upgrade
+```
+
+## Libs
+Install the following useful libs and if you see something missing please do add them here.
+```
+sudo apt-get install libcurl4-openssl-dev
+sudo apt-get install libssl-dev
+sudo apt install -y libkrb5-dev
+sudo apt install -y libcublas-dev
+sudo apt-get install libavutil-dev
+sudo apt-get install libavcodec-dev
+sudo apt-get install libavformat-dev
+sudo apt-get install libswscale-dev
+sudo apt install cifs-utils
+sudo apt-get install nfs-common
+sudo apt install screen
+sudo apt install awscli
+sudo apt install htop
+```
+
+## SSH server
+To be able to SSH into your machine you'll need to run the following on your workstation.
+```
+sudo apt install ssh
+sudo systemctl enable ssh
+sudo systemctl start ssh
+sudo systemctl status ssh
+sudo ufw allow ssh
+```
+
+## Mounting storage
+Your workstation has a 1TB SSD (on which Ubuntu is installed) & a 2TB HDD which needs to be mounted before you can use it.
+```
+sudo mkdir /data
+sudo chown <user>:<group> -R /data
+sudo mount /dev/sda /data
+df -h
+```
+
+## Mounting NAS
+```
+sudo mkdir /nas
+sudo chown <user>:<group> -R /nas
+sudo mount.cifs //perception_nas.local/data /nas -o user=<nas-username>,vers=1.0
+# when prompted, provide your NAS password
+ls -a /nas
+```
+We haven't been able to successfully do this step. `Please help!`
+
+## Setting up python/ipython
+Install the minimal Python/IPython. Our recommendation is Python3.6.5 or later.
+
+```
+sudo apt-get install -y python3-pip
+sudo apt-get install -y ipython3
+sudo ln -s /usr/bin/python3 /usr/bin/python
+sudo ln -s /usr/bin/ipython3 /usr/bin/ipython
+sudo ln -s /usr/bin/pip3 /usr/bin/pip
+pip install -U pip
+```
+Optional : Install the following [useful python packages](https://gitlab.mobilityservices.io/am/roam/perception/data-catalogue/blob/master/requirements.txt)
+```
+curl -O https://gitlab.mobilityservices.io/am/roam/perception/data-catalogue/blob/master/requirements.txt
+pip install -r requirements.txt
+```
+
+## Setting up Conda/Miniconda
+Use virtual environments or docker images to keep your sanity.
+
+```
+cd ~/Downloads
+curl -O https://repo.anaconda.com/archive/Anaconda3-2019.03-Linux-x86_64.sh
+sudo bash Anaconda3-2019.03-Linux-x86_64.sh
+echo -e "\n# export local Conda binaries\nexport PATH=\$HOME/anaconda3/condabin:\$PATH" >> $HOME/.profile
+```
+
+## Identity file
+You'll need to copy your RSA public key `~/.ssh/id_rsa.pub` to your workstation & add it to `~/.ssh/authorized_keys` to be able to ssh into the workstation without typing your password every time. Your RSA key is your signature; use one consistent key for all logins (e.g. Gitlab) to keep your sanity.
+
+```
+ssh-copy-id -i ~/.ssh/id_rsa user@hostname
+```
+
+## Disable Password-based SSH Logins
+
+Once your ssh key is on the machine disable password-based ssh logins for security reasons.
+
+In `/etc/ssh/sshd_config` change `PasswordAuthentication` to `no` and then restart the openssh-server via `sudo service ssh restart`.
+
+## TensorFlow
+Your workstation comes with CUDA 10.1 pre-installed. CUDA 10.1 is not formally supported by TF on Ubuntu 18.04 :see_no_evil: so you'll need to run the following in a Conda environment to sleep peacefully and use TensorFlow. More information [here](https://github.com/tensorflow/tensorflow/issues/26182) and some benchmarks [here](https://github.com/tensorflow/benchmarks) & [here](https://github.com/IntelAI/models/tree/master/benchmarks)
+```
+conda create -n me-working
+conda activate me-working
+conda install cudatoolkit
+conda install cudnn
+conda install tensorflow-gpu
+```
+
+Alternatively [here](https://dmitry.ai/t/topic/33) is a recipe for downgrading to CUDA 10.0:
+
+```
+apt-get --purge remove "*cublas*" "cuda*"
+
+reboot
+
+wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
+dpkg -i cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
+sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
+apt install cuda-10-0
+
+reboot
+```
+
+
+## Docker it up
+
+It's best to package up an app and all its dependencies into a self-contained and reproducible docker image. For some example Dockerfiles and a convenience Makefile see e.g. [this project](https://gitlab.mobilityservices.io/am/roam/perception/prototypes/semantic-frame-index).
+
+To be able to run docker without root privileges, add yourself to the docker group and log out and back in
+
+ sudo usermod -aG docker $USER
+
+Then try
+
+ docker run -it --rm hello-world
+
+
+To copy an image to a workstation without going through the registry
+
+ docker image save das/sfi:gpu | bzip2 | ssh user@workstation 'bunzip2 | docker image load'
+
+## Accessing workstation from home
+
+On the workstation, get the shared ec2 ssh key. Then open a reverse ssh gateway to the ec2 instance.
+Once the reverse ssh gateway is established, traffic to the ec2 instance gets tunnelled to your workstation, allowing you to ssh into it from the outside.
+
+```
+# ssh into your workstation
+ssh <user>@<workstation-hostname>
+# get the shared ec2 key once and set permissions
+cp /nas/team-space/AWS/mirco-instance/ec2-dl-playground.pem.txt ~/.ssh/id_rsa_ec2
+chmod 400 ~/.ssh/id_rsa_ec2
+```
+
+Then for establishing the remote ssh gateway:
+
+```
+# ssh into your workstation
+ssh <user>@<workstation-hostname>
+# start a screen session
+screen -S remote-forward
+# remote forward to our dedicated micro ec2 instances
+# choose a port number from 5001 to 5010 (see port list below)
+ssh -R 5xxx:localhost:22 -o ServerAliveInterval=60 -i ~/.ssh/id_rsa_ec2 ec2-user@63.34.172.253
+# leave screen session
+# ssh out of workstation
+```
+
+Now to connect to your workstation from the outside (e.g. from home)
+
+```
+ssh -p 5xxx -i <identity-file> <user>@63.34.172.253
+```
+
+or for example for tunneling port 6006 (for tensorboard) from your workstation to your laptop
+
+```
+ssh -p <port> -L localhost:6006:localhost:6006 -i <identity-file> <user>@63.34.172.253
+```
+
+### Port & Hostnames
+Assigned port list, user & hostnames (only available from `DAS_NW`)
+
+Port | User | Hostname
+:-------------------------:|:-------------------------:|:-------------------------
+5004 | Mo | mo-workstation
+5005 | Dani | glados
+5007 | Nico | marvin
+5008 | Ilan | ilan-kingdom
+5009 | Harsi | walle
+5006 | Emna | hal
+
+## Useful tools
+
+Install the following tools, which are handy when working with the GPU
+```
+sudo apt install cuda-nvml-dev-10-1
+```
+1. [nvtop](https://github.com/Syllo/nvtop)
+
+## Perception team account
+If your machine is idle (you are on [vacation](http://forgifs.com/gallery/d/80770-5/Kayak-pool-dive-fail.gif) or busy reading up on [papers](https://66.media.tumblr.com/tumblr_m5ezckIAL81r2hf4ro1_500.gif)), allow your peers to use your compute by making a `perception` account as follows
+
+```
+# ssh into your workstation
+sudo adduser perception
+sudo usermod -aG docker perception
+sudo mkdir /home/perception/.ssh
+sudo chmod 0700 /home/perception/.ssh
+sudo cp ~/.ssh/id_rsa.pub /home/perception/.ssh/authorized_keys
+sudo chmod 0600 /home/perception/.ssh/authorized_keys
+sudo chown -R perception /home/perception
+```
+
+Ask your peers for their public keys and add them to the authorised keys as follows
+
+```
+# ssh out of your workstation
+ssh-copy-id -i "<path-to-peer-public-key>" perception@<workstation-hostname>
+```
\ No newline at end of file
diff --git a/cloud-compute.md b/cloud-compute.md
new file mode 100644
index 0000000..b70b6e2
--- /dev/null
+++ b/cloud-compute.md
@@ -0,0 +1,36 @@
+Here you will find instructions on how to boot up a machine with a cloud compute vendor. We currently use AWS as our cloud compute provider. First you'll need AWS credentials; ping your buddy to set them up for you. Once you have logged into the AWS console follow the instructions below
+
+1. Allocate [EBS storage](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-volume.html) for your machine. EBS is your personal storage which you can mount to any machine you boot now and in the future without losing your work.
+
+2. Boot up a GPU machine you need following the instructions [here](https://aws.amazon.com/getting-started/tutorials/get-started-dlami/). AWS will ask you to create a new key-pair for SSH logins into your machine. Keep it handy for future or reuse if you already have one.
+
+3. [Attach](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html) your EBS storage to your machine and [mount](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html) your EBS volume
+
+TL;DR version
+```
+ssh -L localhost:8888:localhost:8888 -i <key-pair>.pem.txt ubuntu@<ec2-public-dns>
+# Execute mkfs only for the first time you initialise your storage
+sudo mkfs -t xfs /dev/xvdf
+# here we mount your EBS volume on /data but you can choose any mount point
+sudo mkdir /data
+sudo mount /dev/xvdf /data
+df -h | grep /dev/xvd
+/dev/xvda1 75G 75G 71G 94% /
+/dev/xvdf 500G 298G 203G 60% /data
+```
+
+4. To write to our S3 buckets this function comes in handy; add it to your `.bashrc` file
+```
+setAWSidentity() {
+ export AWS_REGION=eu-west-1
+ export S3_VERIFY_SSL=1
+ export S3_USE_HTTPS=1
+ export S3_ENDPOINT=s3.eu-west-1.amazonaws.com
+ export AWS_SECRET_ACCESS_KEY=`aws configure get aws_secret_access_key --profile $1`
+ export AWS_ACCESS_KEY_ID=`aws configure get aws_access_key_id --profile $1`
+}
+source ~/.bashrc
+setAWSidentity perception
+```
+
+5. Your machine is ready for use. To prevent loss of work, keep your code/data in `/data`.
\ No newline at end of file
diff --git a/uploads/4bf99d41ff300089a4afcbbf8a76b0a8/cartoon-machine-learning-what-they-think.jpg b/uploads/4bf99d41ff300089a4afcbbf8a76b0a8/cartoon-machine-learning-what-they-think.jpg
new file mode 100644
index 0000000..588f3c6
Binary files /dev/null and b/uploads/4bf99d41ff300089a4afcbbf8a76b0a8/cartoon-machine-learning-what-they-think.jpg differ
diff --git a/uploads/66cd27a1214e214a1c7b84ace09f883c/Screenshot_2019-06-19_at_10.53.32.png b/uploads/66cd27a1214e214a1c7b84ace09f883c/Screenshot_2019-06-19_at_10.53.32.png
new file mode 100644
index 0000000..e51e793
Binary files /dev/null and b/uploads/66cd27a1214e214a1c7b84ace09f883c/Screenshot_2019-06-19_at_10.53.32.png differ
diff --git a/uploads/6a5cb6c40fdbf53ba7947a22ca221afa/Screenshot_2019-06-17_at_11.10.03.png b/uploads/6a5cb6c40fdbf53ba7947a22ca221afa/Screenshot_2019-06-17_at_11.10.03.png
new file mode 100644
index 0000000..2f078ec
Binary files /dev/null and b/uploads/6a5cb6c40fdbf53ba7947a22ca221afa/Screenshot_2019-06-17_at_11.10.03.png differ
diff --git a/uploads/bd7370acaed008c8b0f9d43d41058829/machine_learning_2x.png b/uploads/bd7370acaed008c8b0f9d43d41058829/machine_learning_2x.png
new file mode 100644
index 0000000..b266cb8
Binary files /dev/null and b/uploads/bd7370acaed008c8b0f9d43d41058829/machine_learning_2x.png differ
diff --git a/uploads/e185205143b10270d4841784e50fe8b9/Screenshot_2019-06-19_at_11.02.24.png b/uploads/e185205143b10270d4841784e50fe8b9/Screenshot_2019-06-19_at_11.02.24.png
new file mode 100644
index 0000000..a8221df
Binary files /dev/null and b/uploads/e185205143b10270d4841784e50fe8b9/Screenshot_2019-06-19_at_11.02.24.png differ