Week 5 into GSoC 2024: Vacation, starting with the conditional AutoEncoder
==========================================================================


.. post:: June 28 2024
:author: Iñigo Tellaetxe
:tags: google
:category: gsoc



What I did this week
~~~~~~~~~~~~~~~~~~~~

Hi everyone! This week I have been on vacation, so I could not work on the project as much as in previous weeks. However, I have been thinking about the next steps to take, and I decided to start with the conditional AutoEncoder. I have been reading some papers and found some interesting ideas that would be nice to implement.

While stuck at the Munich airport, I started writing some code for this (the weather was bad and my flight was delayed, so I missed my connecting flight and had to sleep at the airport). I found an implementation of a regression variational AutoEncoder `in this paper <https://doi.org/10.1007/978-3-030-32245-8_91>`_, where the authors manipulate the latent space so that the input data projected into it (streamlines in our case, 3D image patches in the paper) are organized along a desired scalar parameter.

I thought this could be a good starting point for my conditional AutoEncoder, because it basically provides a way to sample from the latent space in a controlled manner, letting you select the age of the streamlines you want to generate. Also, the variational component regularizes the latent space, making our model more resilient against collapsing into the identity function, which can happen in "vanilla" AutoEncoders without any regularization.
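The two variational ingredients mentioned above are the reparameterization trick (to sample from the latent space) and the KL divergence term (the regularizer). Here is a minimal NumPy sketch of both, just to illustrate the idea; the function names are mine, not taken from the paper's code:

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps, with eps ~ N(0, I).

    Sampling this way keeps the operation differentiable with respect
    to mu and log_var, which is what makes VAE training possible.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(N(mu, sigma^2) || N(0, I)), summed over latent dims, averaged over the batch."""
    return float(np.mean(-0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=1)))

rng = np.random.default_rng(0)
mu = np.zeros((4, 8))        # batch of 4 samples, latent dimension 8
log_var = np.zeros((4, 8))   # unit variance
z = reparameterize(mu, log_var, rng)
kl = kl_divergence(mu, log_var)  # zero when the posterior matches the prior
```

The KL term is what pushes the encoded distribution towards the standard normal prior; the regression paper adds, on top of this, a loss that correlates one latent dimension with the scalar of interest (age, in our case).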

Also, the authors provided their code in TensorFlow, so I started adapting it to our use case, which uses 1D convolutions instead of the 3D ones used in the paper.
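The reason for the switch is the shape of the data: a streamline is a sequence of points with a few channels (x, y, z), so a kernel only needs to slide along one axis. A toy NumPy illustration of a "valid" 1D convolution over a streamline (a hand-rolled sketch, not the actual model code):

```python
import numpy as np

def conv1d_valid(signal, kernel):
    """Apply a 1D 'valid' convolution independently to each channel.

    signal: (num_points, channels) array, e.g. a streamline with xyz channels.
    kernel: (kernel_size,) array shared across channels (a simplification;
            real Conv1D layers learn one kernel per input/output channel pair).
    """
    k = len(kernel)
    out = np.empty((signal.shape[0] - k + 1, signal.shape[1]))
    for c in range(signal.shape[1]):
        # np.convolve flips the kernel, so flip it back to get cross-correlation
        out[:, c] = np.convolve(signal[:, c], kernel[::-1], mode="valid")
    return out

streamline = np.linspace(0.0, 1.0, 256 * 3).reshape(256, 3)  # 256 points, xyz
smoothed = conv1d_valid(streamline, np.ones(5) / 5.0)        # 5-point moving average
# output keeps 3 channels but shrinks to 256 - 5 + 1 = 252 points
```

In contrast, the paper's 3D image patches have shape (depth, height, width, channels), so their kernels slide along three spatial axes; the adaptation mostly amounts to swapping those layers for their 1D counterparts and fixing the tensor shapes.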

What is coming up next week
~~~~~~~~~~~~~~~~~~~~~~~~~~~

I will continue working on the conditional AutoEncoder next week. You can follow the progress `here <https://github.com/itellaetxe/tractoencoder_gsoc/blob/main/src/tractoencoder_gsoc/models/cvae_model.py>`_.

Did I get stuck anywhere
~~~~~~~~~~~~~~~~~~~~~~~~

The only place I got stuck was the airport, but thankfully I made it to my destination, even though my baggage was lost (it was delivered 2 days later, and thankfully nothing was missing!).

Until next week!