FIX: Corrects bad RST URL syntax & deletes mentions of week 0 work
itellaetxe committed Jun 9, 2024
1 parent 81d7cee commit 7483a03
Showing 1 changed file with 2 additions and 3 deletions.
5 changes: 2 additions & 3 deletions posts/2024/2024_06_07_Inigo_week_1.rst
@@ -9,8 +9,7 @@ First Week into GSoC 2024: Building the AutoEncoder, preliminary results

What I did this week
~~~~~~~~~~~~~~~~~~~~
-On my last post, I wrote about translating the AutoEncoder architecture from PyTorch to TensorFlow. I have successfully done so, and I have attempted to overfit the model with data from the `FiberCup dataset <https://tractometer.org/fibercup/home/>`__ to check whether the architecture is working as intended, and that the training objective is correctly defined (i.e. the loss function is correctly implemented).
-I also refactored the AutoEncoder code to match the design patterns and the organization of other Deep Learning models in the DIPY repo. I transferred my code to a `separate repo <https://github.com/itellaetxe/tractoencoder_gsoc>__` to keep the DIPY repo clean and to experiment freely. Once the final product is working, I will merge it into DIPY. I also packaged the whole repo so I can use it as a library.
+This week I refactored the AutoEncoder code to match the design patterns and the organization of other Deep Learning models in the DIPY repo. I transferred my code to a `separate repo <https://github.com/itellaetxe/tractoencoder_gsoc>`_ to keep the DIPY repo clean and to experiment freely. Once the final product is working, I will merge it into DIPY. I also packaged the whole repo so I can use it as a library.
Training experiments were run for a maximum of 150 epochs, with variable results. They are not amazing, but at least we get some reconstruction of the input tracts from FiberCup, which seems to be on the right track. I also implemented training logs that report the parameters I used for training, so I can reproduce the results at any time. This still needs work though, because not all parameters are stored. Need to polish!
The left image shows the input tracts, and the middle and right images show two reconstructions from two different training experiments.
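
To make the overfitting sanity check described above concrete, here is a minimal sketch of what overfitting a tiny batch with a 1D-convolutional AutoEncoder in TensorFlow/Keras can look like. The layer sizes, the number of points per streamline (``N_POINTS``) and the random stand-in data are illustrative assumptions and do not correspond to the actual model in the ``tractoencoder_gsoc`` repo; the only point is that the MSE reconstruction loss should drop towards zero if the architecture and the training objective are wired correctly.

.. code-block:: python

    import numpy as np
    import tensorflow as tf

    # Hypothetical shapes: a streamline resampled to 256 points in 3D.
    # Adjust these to the real FiberCup preprocessing.
    N_POINTS, N_DIMS, LATENT_DIM = 256, 3, 32

    def build_toy_autoencoder() -> tf.keras.Model:
        """A deliberately small convolutional AutoEncoder, meant only to
        check that the reconstruction objective decreases."""
        inputs = tf.keras.Input(shape=(N_POINTS, N_DIMS))
        x = tf.keras.layers.Conv1D(16, 3, strides=2, padding="same", activation="relu")(inputs)
        x = tf.keras.layers.Conv1D(32, 3, strides=2, padding="same", activation="relu")(x)
        x = tf.keras.layers.Flatten()(x)
        latent = tf.keras.layers.Dense(LATENT_DIM, name="latent")(x)
        x = tf.keras.layers.Dense((N_POINTS // 4) * 32, activation="relu")(latent)
        x = tf.keras.layers.Reshape((N_POINTS // 4, 32))(x)
        x = tf.keras.layers.UpSampling1D(2)(x)
        x = tf.keras.layers.Conv1D(16, 3, padding="same", activation="relu")(x)
        x = tf.keras.layers.UpSampling1D(2)(x)
        outputs = tf.keras.layers.Conv1D(N_DIMS, 3, padding="same")(x)
        return tf.keras.Model(inputs, outputs)

    # Overfitting check: a single small batch, many epochs. If everything is
    # wired correctly, the reconstruction MSE should approach zero.
    streamlines = np.random.rand(8, N_POINTS, N_DIMS).astype("float32")  # stand-in data
    model = build_toy_autoencoder()
    model.compile(optimizer="adam", loss="mse")
    model.fit(streamlines, streamlines, epochs=200, batch_size=8, verbose=0)
    print("final reconstruction MSE:", model.evaluate(streamlines, streamlines, verbose=0))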

@@ -20,7 +19,7 @@ The left image shows the input tracts, and the middle and right images show two

What is coming up next week
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-With the help of my mentors, we identified possible improvements to the AutoEncoder training process. Yesterday I investigated how PyTorch weights are initialized in convolutional kernels and in Keras Dense layers using the `He Initialization <https://paperswithcode.com/paper/delving-deep-into-rectifiers-surpassing-human>__`. Using custom initializers, one can mimic the same behavior in TensorFlow, which I started to implement also yesterday.
+With the help of my mentors, we identified possible improvements to the AutoEncoder training process. Yesterday I investigated how PyTorch weights are initialized in convolutional kernels and in Keras Dense layers using the `He Initialization <https://paperswithcode.com/paper/delving-deep-into-rectifiers-surpassing-human>`_. Using custom initializers, one can mimic the same behavior in TensorFlow, which I started to implement also yesterday.
This week will focus on tracking down the small implementation differences that might be causing the model to not converge like the PyTorch one does. I will also try to finish implementing the He Initialization in TensorFlow.
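
As an illustration of the custom-initializer idea mentioned above, the following is a minimal sketch of a He/Kaiming-style uniform initializer for Keras ``Conv1D`` and ``Dense`` kernels. The ``HeUniformLike`` class and its default ``gain`` value are assumptions made for the example, not the exact scheme used by the PyTorch reference model; for many cases the built-in ``tf.keras.initializers.HeUniform`` and ``HeNormal`` may already be enough.

.. code-block:: python

    import numpy as np
    import tensorflow as tf

    class HeUniformLike(tf.keras.initializers.Initializer):
        """He (Kaiming) uniform initializer computed from the layer's fan-in.

        Draws weights from U(-bound, bound) with bound = gain * sqrt(3 / fan_in).
        The gain is an assumption here and should be matched to whatever the
        reference PyTorch model actually uses.
        """

        def __init__(self, gain: float = float(np.sqrt(2.0))):
            self.gain = gain

        def __call__(self, shape, dtype=None):
            dtype = dtype or tf.float32
            # For Dense kernels (in_dim, out_dim) and Conv1D kernels
            # (kernel_size, in_channels, out_channels), fan_in is the product
            # of every dimension except the last (the output units/filters).
            fan_in = int(np.prod(shape[:-1]))
            bound = self.gain * np.sqrt(3.0 / fan_in)
            return tf.random.uniform(shape, -bound, bound, dtype=dtype)

        def get_config(self):
            return {"gain": float(self.gain)}

    # Hypothetical usage inside a Keras model definition:
    layer = tf.keras.layers.Conv1D(
        filters=32, kernel_size=3, padding="same",
        kernel_initializer=HeUniformLike(),
    )

With ``gain = sqrt(2)`` the bound reduces to ``sqrt(6 / fan_in)``, i.e. the classic He uniform scheme; PyTorch's default for convolutional and linear layers (``kaiming_uniform_`` with ``a=sqrt(5)``) applies a smaller gain, so the value would have to be adjusted to match it exactly.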

