github-actions[bot] committed Dec 28, 2023
1 parent 66417ea commit 381e477
Showing 196 changed files with 63,283 additions and 0 deletions.
4 changes: 4 additions & 0 deletions dipy.org/pull/19/.buildinfo
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 962379e58cbe8effc22eebd48bc6f206
tags: 645f666f9bcd5a90fca523b33c5a78b7
Empty file added dipy.org/pull/19/.nojekyll
Empty file.
1 change: 1 addition & 0 deletions dipy.org/pull/19/CNAME
@@ -0,0 +1 @@
dipy.org
Binary file added dipy.org/pull/19/_images/DM-MNIST-112epoch.png
Binary file added dipy.org/pull/19/_images/dm3d-monai-B8-DM500.png
Binary file added dipy.org/pull/19/_images/vq-vae-results.png
Binary file added dipy.org/pull/19/_images/vqvae-monai-B12-CC.png
Binary file added dipy.org/pull/19/_images/vqvae3d-monai-B10.png
Binary file added dipy.org/pull/19/_images/vqvae3d-monai-B5.png
Binary file added dipy.org/pull/19/_images/vqvae3d-reconst-f2.png
Binary file added dipy.org/pull/19/_images/vqvae3d-reconst-f3.png
3 changes: 3 additions & 0 deletions dipy.org/pull/19/_sources/blog.rst.txt
@@ -0,0 +1,3 @@
====
Blog
====
19 changes: 19 additions & 0 deletions dipy.org/pull/19/_sources/calendar.rst.txt
@@ -0,0 +1,19 @@
.. _calendar:

========
Calendar
========

You can stay updated with upcoming DIPY_ events. Check out our events calendar.

.. raw:: html

<iframe class="calendar" src="https://calendar.google.com/calendar/embed?src=uv8c50fkfvs529837k298ueqh0%40group.calendar.google.com&ctz=America%2FIndiana%2FIndianapolis" title="DIPY Calendar"></iframe>


Get Calendar
--------------
You can also add the DIPY_ calendar to your Google calendar with this `link <https://calendar.google.com/calendar/u/0?cid=dXY4YzUwZmtmdnM1Mjk4MzdrMjk4dWVxaDBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ>`_.

.. include:: links_names.inc

6 changes: 6 additions & 0 deletions dipy.org/pull/19/_sources/index.rst.txt
@@ -0,0 +1,6 @@
.. toctree::
:maxdepth: 2
:hidden:

blog
calendar
35 changes: 35 additions & 0 deletions dipy.org/pull/19/_sources/posts/2023/2023_05_19_vara_week0.rst.txt
@@ -0,0 +1,35 @@
Journey of GSOC application & acceptance: Week 0
=================================================

.. post:: May 19 2023
:author: Vara Lakshmi Bayanagari
:tags: google
:category: gsoc

While applying for the GSOC 2023 DIPY sub-project titled "Creating Synthetic MRI", I knew
this would be the right one for me for two reasons. Keep reading to find out more!

As nervous and not-so-optimistic as I am about applying to academic competitions, I pushed
myself to apply for GSOC out of a need for a summer job more than anything. This got me out
of my comfort zone, and I ventured into open source development. At the time of my application
I was a Master's student at NYU (current status: graduated) with a focus on Deep Learning
applications in healthcare. I was so involved in Computer Vision research during school that I
decided to pursue a career in the same field going forward. Fortunately, around that time I came
across a college senior's LinkedIn post about being accepted as a mentor for GSOC 2023.
This prompted me to look further into GSOC and its list of projects for this year. I had only
heard of GSOC during my undergrad, when I never could muster the courage to pursue something
outside college. But this time around, I decided to put on a confident front and take the leap.

As I searched through the list of available projects, I iteratively narrowed down what I
wanted to work on - I looked for Python projects first, filtered down to machine learning
projects next, and finally narrowed the field to a couple of relevant projects. In the process,
I came across the list of DIPY projects. Firstly, I was looking to further my research knowledge
in ML by exploring Generative AI. Secondly, I had worked with MRI datasets in the context of
Deep Learning before, so the 'Creating Synthetic MRI' project seemed like the right fit. These
reasons got me hooked on the DIPY sub-organization. I thoroughly enjoyed exploring DIPY's
applications and soon began preparing my application. With wonderful help from the mentors, I
successfully submitted the application, later got an interview call, and voila, I got in!

I am very happy about participating in GSOC this year. What started out as a necessity has become
a passion project. I hope to enjoy the journey ahead, and I look forward to learning and
implementing a few things along the way!
@@ -0,0 +1,82 @@
Community Bonding and Week 1 Insights
=====================================

.. post:: May 29 2023
:author: Shilpi Prasad
:tags: google
:category: gsoc


About Myself
~~~~~~~~~~~~

Hey there! I'm Shilpi, a Computer Science and Engineering undergrad at Dayananda Sagar College of Engineering, Bangalore, on track to get my degree in 2024.
My relationship with Python started just before college, when I got my hands dirty with an awesome Python Specialization course on Coursera.
When it comes to what makes me tick, it's all things tech. New technology always excites me. Ubuntu, with its fancy terminal and all, used to intimidate me at first, but now I get a thrill out of using it to do even the simplest things.
Up until my 2nd year I did competitive programming and a bit of ML, but from my 3rd year onward I've been into ML very seriously, taking several ML courses as well as solving ML problems on Kaggle. ML is a lot of fun, and I've done a few ML projects as well.
Coding? Absolutely love it. It's like this is what I was meant to do, y'know? I got introduced to Git and GitHub in my first year - I was super curious about how the whole version control thing worked. Then I stumbled upon the world of open source in my second year and made my first contribution to Tardis (`<https://github.com/tardis-sn/tardis/pull/1825>`_).
Initially I intended to do GSoC during my second year but ended up stepping back for various reasons. This time, though, I was fired up to send a proposal to at least one GSoC organization. And, well, here we are!

Intro to Open-Source and GSoC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

So I started off by finding out about GSoC - how many hours selected folks put in, the kinds of projects people usually tackle, and all that stuff. To get a handle on what a proposal needs, I turned to some successful ones from previous years. That really gave me an idea of what they expect you to bring to the table.
While trying to find an organization that matched my skill set, I stumbled upon the Python Software Foundation, and I was like, "This is it!". And under PSF, there was DIPY.
Diving into DIPY's docs was a breeze: they're so well put together that I managed to get my head around a completely new topic, "Diffusion MRI", just by going through the introductory docs and a bit of GPT.
While exploring DIPY, I noticed an issue that needed a new feature. It took a good bit of reading to really understand what they were looking for and how to actually build that feature. And then I submitted my first PR (`check it out here <https://github.com/dipy/dipy/pull/2749>`__)! Getting it merged wasn't exactly easy - there was a lot of room for improvement in my code - but honestly, I feel like it's all part of the learning curve.
I was a bit of a latecomer to GSoC, so I didn't have much time to rack up a ton of PRs. Plus, by the time I'd submitted my first PR, the proposal submission period had already begun. So I focused all my energy on deepening my knowledge of the topic and polishing my proposal. I also wanted to get my proposal reviewed at least once before submitting it.

Code contributions:

1. https://github.com/dipy/dipy/pull/2749

The Day
~~~~~~~

May 4th: I woke up feeling like a nervous wreck. That interview with my organization? Let's just say it didn't go very well. Yet I couldn't help but hope for the best. The results were supposed to drop at 11:45 pm, a moment I wasn't exactly looking forward to.
I tried logging into Google to check but couldn't - too many people doing the same thing. I threw my hands up, gave up on the login battle, and got back to work, hoping to distract myself.
Fast forward to 1:30 am - I figured the log-in rush should have calmed down by then. I gave it another shot and... I got in! I clicked over to the dashboard, and there it was. My project. Right there, listed under the Projects section. I had heard that if you get selected, your proposal shows up there.
To confirm it was actually happening, I picked up my phone to check whether I'd gotten any official email yet. And yes!! I'd gotten it at 12:49 am. I just hadn't checked.
I whooped, woke up my roomies, and rushed to call my parents.
Honestly, words can't even begin to capture how I felt at that moment.
Pure, undiluted joy, that's what it was. My parents, surprisingly, actually picked up my call, and the minute I told them I'd made the cut, they congratulated me. It was a heck of a day, (^^).

What I did this week
~~~~~~~~~~~~~~~~~~~~

As this was my first week, I spent most of my time getting to know the organization's codebase. I also went through a couple of research papers on projects that have already been implemented, to gather information related to my branch of the project.
I'm currently in the middle of reading the research paper directly related to my project: `here <https://www.sciencedirect.com/science/article/pii/S1053811920300926>`__.
I also went through some videos on CTI; a couple of them are `this <https://www.youtube.com/watch?v=bTFLGdbSi9M>`__ and also `this <https://www.youtube.com/watch?v=2WtGl3YQou8&list=PLRZ9VSqV-6srrTAcDh4JYwrlef2Zpjucw&index=16>`__.
I also submitted `this <https://github.com/dipy/dipy/pull/2813>`__ PR, in which members of my organization are supposed to submit all the blogs.
But mostly I spent a lot of time implementing the already existing MultiTensor simulation on my local system, and also completing the assignment my mentor gave me.
In this assignment, I was given a specific number of directions, ``n``, and some steps on how to produce bvals and bvecs. I had to create ``gtab1`` and ``gtab2``, and then, taking ``gtab1`` & ``gtab2`` as input, I was supposed to write a function that would output the b-tensor, i.e. ``btens``.
The purpose of this assignment was to strengthen my knowledge of the concepts I'd already read about and also to give me some coding experience, as this is critical for me to be able to implement the rest of my project.
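
The core of that function can be sketched in plain NumPy. This is a minimal, hypothetical helper (``make_btens`` is my own name, not the assignment's exact code), assuming linear tensor encoding, where each measurement's b-tensor is the outer product of its direction with itself, scaled by the b-value:

```python
import numpy as np

def make_btens(bvals, bvecs):
    # Hypothetical helper (not the assignment's exact code): for linear
    # encoding, each b-tensor is the rank-1 outer product b * g g^T.
    bvals = np.asarray(bvals, dtype=float)
    bvecs = np.asarray(bvecs, dtype=float)
    # einsum builds the (N, 3, 3) stack of outer products in one call
    return bvals[:, None, None] * np.einsum('ij,ik->ijk', bvecs, bvecs)

bvals = np.array([1000.0, 2000.0])
bvecs = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
btens = make_btens(bvals, bvecs)
print(btens.shape)  # (2, 3, 3)
# the trace of each b-tensor recovers its b-value
print(np.trace(btens[0]))  # 1000.0
```

If I remember right, DIPY's ``gradient_table`` can also carry b-tensor information, so a full solution would feed tensors like these into ``gtab``-style objects.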

What is coming up next week
~~~~~~~~~~~~~~~~~~~~~~~~~~~

These simulations were basically the first task of the proposal.
So, after the b-tensor, I intend to produce the synthetic signals using the QTI model (there is
a hint on how this is done in the QTI tests) and make a figure similar to Figure 1 of the 2021
CTI paper: `here <https://onlinelibrary.wiley.com/doi/full/10.1002/mrm.28938>`__


Did I get stuck anywhere
~~~~~~~~~~~~~~~~~~~~~~~~

I got stuck while creating `this <https://github.com/dipy/dipy/pull/2813>`__ PR. I had to rebase a bunch of commits, and since this was a new concept to me, it took me a while to figure out. Due to rebasing, I ended up creating a bunch of extra commits, which made the PR's commit history a mess. So I had to learn about squashing commits.

I also got stuck a lot while trying to find the directions perpendicular to the vectors used in ``gtab1``. I was supposed to implement the following formula:

.. image:: https://github.com/dipy/dipy/blob/09a8c4f8436f995e55231fb3d11fbfe6749610a9/_static/images/formula_.png?raw=true
:width: 400
:alt: formula cti gtab

I had to spend a lot of time figuring out how to combine 3 vectors of shape (81, 3) to get V, and also working on the function that would give me the vector perpendicular to each vector in ``gtab1``.

I got a bunch of ``ValueError`` messages saying "could not broadcast input array from shape (3,3,1) into shape (3,3)", and some ``IndexError`` messages saying "shape mismatch: indexing arrays could not be broadcast together with shapes (81,) (3,1) (3,)".

I also had to experiment with how to concatenate different vectors to get a vector of the right shape, since there are a bunch of stacking options, such as ``vstack``, ``hstack``, ``stack``, etc.
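
A small NumPy sketch (with made-up random directions standing in for ``gtab1``, not the actual assignment data) shows how the stacking choice decides the output shape, and one standard way to get a vector perpendicular to each direction:

```python
import numpy as np

rng = np.random.default_rng(0)
g1 = rng.normal(size=(81, 3))                    # stand-in for the gtab1 directions
g1 /= np.linalg.norm(g1, axis=1, keepdims=True)  # normalize to unit vectors

# One standard way to get a vector perpendicular to each direction:
# cross with a fixed reference axis (this fails only for directions
# parallel to that axis, which random unit vectors essentially never are).
ref = np.array([0.0, 0.0, 1.0])
perp = np.cross(g1, ref)
perp /= np.linalg.norm(perp, axis=1, keepdims=True)

# Combining three (81, 3) arrays: the stacking choice decides the shape.
a, b, c = g1, perp, np.cross(g1, perp)
print(np.vstack([a, b, c]).shape)         # (243, 3) - rows appended
print(np.stack([a, b, c], axis=1).shape)  # (81, 3, 3) - one 3x3 frame per direction
print(np.hstack([a, b, c]).shape)         # (81, 9) - columns appended
```

Most of my broadcasting errors came down to picking the wrong one of these three, so printing shapes like this saved me a lot of time.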

26 changes: 26 additions & 0 deletions dipy.org/pull/19/_sources/posts/2023/2023_05_29_vara_week1.rst.txt
@@ -0,0 +1,26 @@
Community bonding and Project kickstart: Week 1
================================================

.. post:: May 29 2023
:author: Vara Lakshmi Bayanagari
:tags: google
:category: gsoc

What I did this week
~~~~~~~~~~~~~~~~~~~~

The Community Bonding period ended last week, and my first blog is based on the work carried
out during that week. My meeting with my GSOC mentors at the start of the week helped me chalk
out an agenda for the week. As the first step, I familiarized myself with Tensorflow
operations, functions and distribution strategies. My previous experience with PyTorch, as
well as `website tutorials <https://www.tensorflow.org/tutorials/images/cnn>`_ on basic Deep
Learning models, helped me pick up Tensorflow quickly. As the next step, I read the VQ-VAE paper and
worked through its TensorFlow open source implementation. VQ-VAE addresses the 'posterior collapse'
seen in traditional VAEs and overcomes it by discretizing the latent space. This in turn also
improves the generative capability, producing less blurry images than before.
Getting familiar with VQ-VAE early on helps in understanding the latents used in Diffusion models
in later steps. I also explored a potential dataset - `IXI (T1 images) <https://brain-development.org/ixi-dataset/>`_
- and performed some exploratory data analysis, such as age & sex distributions. The images contain
the entire skull, so they may require brain extraction & registration. It may be more useful
to use existing preprocessed datasets & align them to a template. Next week, I'll be
conducting a further literature survey on Diffusion models.
@@ -0,0 +1,24 @@
Signal Creation & Paper Research: Week 2 Discoveries
====================================================

.. post:: June 05 2023
:author: Shilpi Prasad
:tags: google
:category: gsoc



What I did this week
~~~~~~~~~~~~~~~~~~~~
I worked through this research paper and found some facts relevant to the tasks at hand, such as the different sources of kurtosis. One other important fact I found was that DDE comprises 2 diffusion encoding modules characterized by different q-vectors (q1 and q2) and diffusion times. This is important because the CTI approach is based on DDE's cumulant expansion, and the signal is expressed in terms of 5 unique second- and fourth-order tensors. I also found out how the synthetic signals could be created using 2 different scenarios: a mix of Gaussian components, and a mix of Gaussian and/or restricted compartments.
I spent most of this week creating synthetic signals, and therefore creating simulations.


What is coming up next week
~~~~~~~~~~~~~~~~~~~~~~~~~~~
I intend to finish the simulations with appropriate documentation and theory lines. If time permits, I'll resume working on the ``cti.py`` file and its tests section.


Did I get stuck anywhere
~~~~~~~~~~~~~~~~~~~~~~~~
I didn't get stuck; however, it did take me a while to go through all the code I could possibly need in my simulations, and to understand the theory behind that code.
33 changes: 33 additions & 0 deletions dipy.org/pull/19/_sources/posts/2023/2023_06_05_vara_week2.rst.txt
@@ -0,0 +1,33 @@
Deep Dive into VQ-VAE : Week 2
==============================

.. post:: June 05, 2023
:author: Vara Lakshmi Bayanagari
:tags: google
:category: gsoc


What I did this week
~~~~~~~~~~~~~~~~~~~~
This week I took a deep dive into VQ-VAE code. Here's a little bit about VQ-VAE -

VQ-VAE is a VAE with a discretized latent space, which helps in achieving high quality outputs. It differs from a VAE in two ways: the use of a discrete latent space, and separate prior training. VAEs have also shown impressive generative capabilities across data modalities - images, video, audio.

By using a discrete latent space, VQ-VAE bypasses the 'posterior collapse' mode seen in traditional VAEs. Posterior collapse is when the latent space is not utilized properly and collapses to similar vectors independent of the input, resulting in few variations when generating outputs.

The encoder and decoder weights are trained along with L2 updates of the embedding vectors. A categorical distribution is assumed over these latent embeddings, and to truly capture the distribution of these vectors, the latents are further trained using a PixelCNN model.
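
The discretization step itself is simple enough to sketch in plain NumPy (with made-up sizes; a real implementation also adds a straight-through gradient and the embedding updates mentioned above): each encoder output is snapped to its nearest codebook vector.

```python
import numpy as np

rng = np.random.default_rng(42)
codebook = rng.normal(size=(8, 4))   # K=8 embedding vectors of dimension D=4
latents = rng.normal(size=(5, 4))    # encoder outputs, flattened to (N, D)

# Squared distance from every latent to every code: shape (N, K)
d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
codes = d2.argmin(axis=1)            # discrete indices - the "tokens" the prior models
quantized = codebook[codes]          # (N, D) latents snapped to nearest codes

print(codes.shape)      # (5,)
print(quantized.shape)  # (5, 4)
```

The ``codes`` indices are exactly what the PixelCNN prior is later trained on, which is why the latent space ends up with a categorical distribution.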

In the original paper, PixelCNN was shown to capture the distribution of the data while also delivering rich detail in generated output images. In the image space, a PixelCNN decoder reconstructs a given input image with varying visual aspects such as colors, angles, lighting, etc. This is achieved through autoregressive training with the help of masked convolutions. Autoregressive training, coupled with categorical distribution sampling at the end of the pipeline, makes PixelCNN an effective generative model.
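
The masked convolutions can be illustrated by the kernel masks themselves. Here is my own minimal sketch (``pixelcnn_mask`` is a hypothetical helper, not the blog's exact code) of the standard type 'A'/'B' masks: type 'A' (first layer) hides the current pixel, while type 'B' (later layers) is allowed to see it.

```python
import numpy as np

def pixelcnn_mask(k, mask_type='A'):
    """Hypothetical helper: mask for a k x k conv kernel so a pixel only
    sees pixels above it, plus those to its left on the same row.
    Type 'A' also hides the centre pixel; type 'B' keeps it."""
    m = np.ones((k, k))
    c = k // 2
    m[c, c + (mask_type == 'B'):] = 0  # centre row: zero from the centre (A) or just after it (B)
    m[c + 1:, :] = 0                   # every row below the centre
    return m

print(pixelcnn_mask(3, 'A'))
# [[1. 1. 1.]
#  [1. 0. 0.]
#  [0. 0. 0.]]
print(pixelcnn_mask(3, 'B'))
# [[1. 1. 1.]
#  [1. 1. 0.]
#  [0. 0. 0.]]
```

Multiplying the kernel weights by such a mask before each convolution is what makes the model autoregressive: no pixel ever depends on pixels that come after it in raster order.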

A point to note here is that the prior of VQ-VAE is trained in the latent space rather than the image space through PixelCNN. So it doesn't replace the decoder as discussed in the original paper; rather, it is trained independently to reconstruct the latent space. So, the first questions that come to my mind are: How does latent reconstruction help in image generation? Is prior training required at all? What happens if it is not done?

My findings on MNIST data show that the trained prior works well only with the right sampling layer (``tfp.layers.DistributionLambda``), which helps with uncertainty estimation. Therefore, PixelCNN's autoregressive capabilities are as important as defining a distribution layer on top of them. Apart from this, I've also been researching and collating different MRI datasets to work on in the future.

What is coming up next week
~~~~~~~~~~~~~~~~~~~~~~~~~~~
My work for next week includes checking insights on the CIFAR dataset and brushing up on Diffusion Models.

Did I get stuck anywhere
~~~~~~~~~~~~~~~~~~~~~~~~
Working with the VQ-VAE code required digging in a little before drawing conclusions from the results obtained. I reached out to the author of the Keras implementation blog to verify a couple of things, conducted a couple more experiments than estimated, and presented the work at the weekly meeting.
