Posts in GSoC

Google Summer of Code Final Work Product

Name: Iñigo Tellaetxe Elorriaga

Read more ...


Google Summer of Code Final Work Product

Name: Kaustav Deka

Read more ...


My Journey Continues: Week 12 Progress with DIPY

Hello everyone! We’ve reached Week 12, the final week of my GSoC journey with DIPY. It’s been an incredible experience, and I’m excited to share my progress and reflections from this week.

Read more ...


Week 12 into GSoC 2024: Last weeks of the coding phase and admin stuff

This week I spent more time fixing and enhancing the adversarial AutoEncoder (AAE) implementation, as well as writing the final GSoC 2024 post. I also opened a draft PR to reference in the final post, giving my work a tangible, public place to live as a PR.

Read more ...


My Journey Continues: Week 11 Progress with DIPY

Hello everyone! Week 11 has been another week of progress, although it came with its own set of challenges. I’ve been working on Docker and updating the tutorial fixes, and I also have some exciting news on the personal front. Let me take you through the highlights of this week.

Read more ...


Week 11 into GSoC 2024: The Adversarial AutoEncoder

This week was all about learning about adversarial networks, attribute-based latent space regularization in AutoEncoders, and fighting with Keras and TensorFlow to implement the adversarial framework. It was a bit (or two) challenging, but I managed to do it, thanks to a very nice and clean implementation I found, based on the original adversarial AutoEncoders paper.

Read more ...


My Journey Continues: Week 10 Progress with DIPY

Hello everyone! Week 10 has been a challenging one, with a lot happening both in my personal life and with the DIPY project. Unfortunately, I wasn’t able to make as much progress as I had hoped, but I still managed to get some important work done. Let me walk you through what I accomplished this week.

Read more ...


Week 10 into GSoC 2024: Validating the conditional VAE results

During this week I focused on validating the results of the conditional VAE (cVAE) that I implemented and experimented with last week.

Read more ...


My Journey Continues: Week 9 Progress with DIPY

Hello everyone! It’s time for another update on my progress. Week 9 has been a blend of learning, preparation, and a bit of personal work as I continue my journey with the dipy.org project. This week, I focused on diving into Docker, an essential tool for the upcoming tasks in our project. Let me take you through what I accomplished and learned over the past few days.

Read more ...


Week 9 into GSoC 2024: The Conditional VAE implementation

This week was a bit shorter than usual because Thursday was a holiday in the Basque Country, and today we had an outdoor activity with my lab mates (we went kayaking in the Urdaibai Biosphere Reserve). Nevertheless, it was full of advances and interesting scientific matters.

Read more ...


My Journey Continues: Week 8 Progress with DIPY

Hello everyone! Time for another week of progress. This week has been particularly productive as I tackled several important issues in the dipy.org project and implemented an enhancement suggested by my mentor. Let me walk you through the details of my work.

Read more ...


Week 8 into GSoC 2024: Further advances with the VAE model

This week I continued training the VAE model with the FiberCup dataset, this time for 120 epochs, and the results are promising. The model is able to reconstruct the input data with a decent level of detail.

Read more ...


My Journey Continues: Week 7 Progress with DIPY

Greetings, everyone! The seventh week of GSoC has been a fruitful one, full of learning.

Read more ...


Week 7 into GSoC 2024: Starting to see the light at the end of the VAE

Finally, I figured out how to solve the nan value problem in the VAE training. As I suspected, the values that the ReparametrizationTrickSampling layer was receiving were too big for the exponential operation. I use the exponential because I treat the Encoder output as the log variance of the latent space distribution, and sampling requires the standard deviation. We work with the log variance instead of the standard deviation to avoid computing logarithms.
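
As a minimal sketch of the sampling step described above (the log-variance convention and layer name are from the post; the clipping guard is my assumption of a typical fix, not necessarily the one actually used):

```python
import tensorflow as tf


class ReparametrizationTrickSampling(tf.keras.layers.Layer):
    """Sample z = mu + sigma * eps, with sigma = exp(0.5 * log_var)."""

    def call(self, inputs):
        mu, log_var = inputs
        # Clamping log_var keeps exp() from overflowing when the encoder
        # outputs very large values (the suspected source of the nan losses).
        log_var = tf.clip_by_value(log_var, -10.0, 10.0)
        eps = tf.random.normal(shape=tf.shape(mu))
        return mu + tf.exp(0.5 * log_var) * eps
```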

Read more ...


My Journey Continues: Week 6 Progress with DIPY

Greetings, everyone! The sixth week of GSoC has been a hectic one: a ton of time went into correcting errors and fixing PRs.

Read more ...


Week 6 into GSoC 2024: Stuck with the Variational AutoEncoder, problems with Keras

This week was all about the Variational AutoEncoder. My mentors advised me to drop the TensorFlow implementation of the regression VAE I found last week and instead directly integrate the variational and conditional characteristics into my AE implementation, following a more modular approach. This was a good decision, as adapting third-party code to one’s needs is often a bit of a mess (it had already started being a mess, so yeah). Also, once the variational part is done, implementing the conditional part should not be that hard.

Read more ...


My Journey Continues: Week 5 Progress with DIPY

Hello everyone, I hope this update finds you well. The fifth week of my Google Summer of Code (GSoC) journey with DIPY has been a bit different from the previous ones, and I wanted to share an honest update about my progress and plans.

Read more ...


Week 5 into GSoC 2024: Vacation, starting with the conditional AutoEncoder

Hi everyone! This week I have been on vacation, so I have not been able to work on the project as much as the previous weeks. However, I have been thinking about the next steps to take and I have decided to start with the conditional AutoEncoder. I have been reading some papers and I have found some interesting ideas that would be nice to implement.

Read more ...


My Journey Continues: Week 4 Progress with DIPY

Hello everyone, I hope this update finds you well. Progress in the fourth week of GSoC has been a little slow.

Read more ...


Week 4 into GSoC 2024: Weight transfer experiments, hardships, and results!

Well, this week was really intense. I spent most of the time trying to transfer the weights from the pre-trained PyTorch model of the TractoInferno dataset to the Keras model. I must say that thanks to the reduced size of the AutoEncoder, it was feasible to do it layer by layer without going crazy.

Read more ...


My Journey Continues: Week 3 Progress with DIPY

Greetings, everyone! The third week of the Coding phase has been a whirlwind of progress. I have achieved significant milestones in both the decorator implementation and lazy loading integration tasks, bringing us closer to enhancing DIPY’s performance and efficiency.

Read more ...


Third Week into GSoC 2024: Replicating training parameters, approaching replication

This week was slightly less productive because I was really busy with my PhD tasks, but I managed to progress nevertheless. After implementing custom weight initializers (with He Initialization) for the Dense and Conv1D layers in the AutoEncoder (AE), I launched some experiments to try to replicate the training process of the original model. This yielded better results than last week, this time setting the weight decay, the learning rate, and the latent space dimensionality as shown in the FINTA paper. Now the AE has no problem learning that the bundles have depth, and the number of broken streamlines decreased a lot compared to the previous results. I also worked on trying to monitor the training experiments using TensorBoard, but I did not succeed because it was a last minute idea and I did not have time to implement it properly.
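
As a rough illustration of the custom initialization mentioned above (layer sizes and shapes are my own placeholders, not the actual AE architecture):

```python
from tensorflow import keras

he_init = keras.initializers.HeNormal(seed=42)

# Illustrative encoder fragment: each Dense/Conv1D layer gets He init.
encoder_fragment = keras.Sequential([
    keras.layers.Conv1D(32, kernel_size=3, strides=2, padding="same",
                        activation="relu", kernel_initializer=he_init),
    keras.layers.Flatten(),
    keras.layers.Dense(32, activation="relu", kernel_initializer=he_init),
])
```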

Read more ...


My Journey Continues: Week 2 Progress with DIPY

Greetings, everyone! It’s time for another update on my Google Summer of Code (GSoC) journey with DIPY. The second week of the Coding phase has been equally productive and exciting, with significant advancements in both tasks.

Read more ...


Second Week into GSoC 2024: Refactoring the AutoEncoder, preliminary results

This week I refactored the AutoEncoder code to match the design patterns and organization of other Deep Learning models in the DIPY repo, and to make the training loop more efficient and easier to use. I transferred my code to a separate repo to keep the DIPY repo clean and to experiment freely. Once the final product is working, I will merge it into DIPY. I also packaged the whole repo so I can use it as a library. Training experiments were run for a maximum of 150 epochs, with variable results. They are not amazing, but at least we get some reconstruction of the input tracts from FiberCup, which seems to be on the right track. I also implemented training logs that report the parameters I used for training, so I can reproduce the results at any time. This still needs work though, because not all parameters are stored. Need to polish! The left image shows the input tracts, and the middle and right images show two reconstructions from two different training experiments.
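
On the training logs mentioned above, a minimal sketch of the idea (field names and values are illustrative, not the actual experiment settings):

```python
import json
import time

params = {"epochs": 150, "latent_dim": 32, "learning_rate": 6.8e-4}
log_name = "train_log_{}.json".format(int(time.time()))
with open(log_name, "w") as f:
    json.dump(params, f, indent=2)  # reload later to rerun the experiment
```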

Read more ...


My Journey Continues: Week 1 Progress with DIPY

Hello everyone, I am back with another update on my Google Summer of Code (GSoC) journey with DIPY. The Community Bonding period has come to an end, and I am now fully immersed in the Coding phase of the project.

Read more ...


First Week into GSoC 2024: Building the AutoEncoder, writing the training loop

I finished becoming familiar with the TensorFlow + Keras basics and I wrote the training loop and a couple of scripts for instantiating and training the AutoEncoder. Data loading was also addressed and I am able to load the data from the FiberCup dataset in .trk format using NiBabel, transform it into NumPy arrays, and feed it into the network.
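
A minimal sketch of that loading step, assuming a local FiberCup file (the path is hypothetical):

```python
import numpy as np
import nibabel as nib

tractogram = nib.streamlines.load("fibercup.trk")  # hypothetical path
streamlines = tractogram.streamlines  # nibabel ArraySequence

# Streamlines may have different lengths, so keep them as a list of
# (n_points, 3) float arrays until a resampling/padding step.
arrays = [np.asarray(s, dtype=np.float32) for s in streamlines]
```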

Read more ...


My Journey Begins: Community Bonding Period with DIPY

Hello everyone, I am thrilled to share that I have been selected as a Google Summer of Code (GSoC) student for 2024. Over the summer, I will be working with DIPY, and I am incredibly excited about the journey ahead.

Read more ...


Community Bonding Period Summary and first impressions

Hi everyone! I am Iñigo Tellaetxe Elorriaga, BSc in Biomedical Engineering and MSc in Biomedical Technologies at Mondragon Unibertsitatea, Basque Country. I am a first-year PhD student in the Computational Neuroimaging Laboratory at the Biobizkaia Health Research Institute, also in the Basque Country. In the lab, our main paradigm is brain connectivity, so I am familiar with diffusion MRI and tractography. My main lines of research are brain aging, age modelling, and neurorehabilitation, all in the presence of neurodegenerative diseases and acute brain injuries. As for my programming skills, I am mainly a Python developer and I am one of the main contributors to the ageml library, which we are developing at our lab as part of my PhD thesis. I also worked in industry as a research engineer in the field of medical computer vision at Cyber Surgery, developing new methods to generate synthetic CT images from MRI using generative diffusion models, to reduce ionizing radiation in spinal surgery patients. I have been using DIPY for a while now for my research and other projects, so I am obviously really excited to contribute to the project this summer.

Read more ...


Doing Final Touch-Ups: Week 14

This week I fixed the test for the isotropic source of kurtosis, so now it works for all DTDs. I also created tests for the K_micro function. Initially, while running the test, I got some errors which made me look deeper into the actual function. The error was that I was taking the square root of some elements when I was actually supposed to square them. I was also using a ‘1/5’ factor that was not actually required. On fixing these issues, the overall map image of K_micro improved significantly. Previously, the multi-voxel test case was failing due to different eigenvalues in the isotropic total diffusion tensor simulations. Removing the eigenvalue assertion made the test pass, as verifying the kt, cvt, and evals values sufficed. I also added documentation for some functions in the test file, such as _perpendicular_directions_temp_ and from_qte_to_cti, and renamed some functions so their names better reflect what they actually do. This week I also almost finished the CTI tutorial. The only thing remaining is to create a fetcher for the data so that all users can download and use it; currently the path given for data retrieval points to my local system. I also added some references and overall improved the wording and information in the tutorial. I finished writing my final work report, got it reviewed by my mentors, and then updated it. Finally, before pushing the file onto the main PR, I cleaned up the code by removing all the extra comments and some unnecessary code, and made sure the entire codebase followed the PEP8 standard.

Read more ...


Writing Tests & Making Documentation: Week 13

This week, I finished writing tests for the sources of kurtosis. While the isotropic source passed the test only for the anisotropic DTD, the anisotropic source passed the tests for all DTDs. As a result, I integrated the test for the anisotropic source within the test_cti_fits function, eliminating the need for a separate function. I also created tests for multi-voxel cases, but those tests passed only for single-voxel cases. One reason this might be happening is the way we are accessing the covariance and diffusion tensor elements; I intend to look further into this. I also worked on real-life data, attempting to plot maps, but it did not work out because the current kurtosis source implementations do not handle multi-voxel cases. Even though I was not able to get the desired result, I am sure I will figure it out with further research and possible collaboration.

Read more ...


Finalized experiments using both datasets: Week 12 & Week 13

MONAI’s VQVAE results on the T1-weighted NFBS dataset (125 samples, batch size 5) were qualitatively and quantitatively superior to all previous results. I continued the same experiments on the T1-weighted CC359 (Calgary-Campinas-359) public dataset, consisting of 359 anatomical MRI volumes of healthy individuals. I preprocessed the data using the existing transform_img function.

Read more ...


Week 12: Making Test Functions Work

Picking up from last week’s situation, I was trying to make the test_cti_fits function work, as well as the test_split_cti_params function. This week I figured out the problem with these functions and was able to fix it. The major problem occurred while I was trying to compare the parameters. First, I removed all the extra fit methods in the common_fit_method list, such as NLS, CLS, and CWLS, as I realized that we won’t immediately need an extra multi_tensor_fit function in CTI.

Read more ...


Making the Tests Work: Week 11

Previously, I had the function for different sources of kurtosis outside the Fit class. Upon suggestion from my mentor, this week I put them inside the Fit class. This required me to make changes to how certain variables were being called inside those functions. I also had to determine what arguments needed to be passed to those functions.

Read more ...


Carbonate issues, GPU availability, Tensorflow errors: Week 10 & Week 11

Recently, I was assigned an RP (Research Project) account on Indiana University Bloomington’s HPC cluster, Carbonate. This account lets me access multiple GPUs for my experiments in a dedicated environment.

Read more ...


Adding Tests: Week 10

Last week, we decided to generate a DTD to make the model more robust. This decision accounted for situations where almost all the parameters were non-zero. However, the signals weren’t matching exactly in that situation. This week, I fixed that issue. We can now safely say that all DTDs will match the ground-truth signals, regardless of which parameters are non-zero or what changes we make. We accomplished this by figuring out the correct order of the ccti parameters. These are the covariance parameters that take the √2 and 2 factors into consideration.

Read more ...


Generating Fit Functions: Week 8 & 9

This week, I started by figuring out how to run Spyder on Ubuntu. After resolving technical problems, I needed to ensure I could edit code to meet PEP8 standards, but the automatic formatting of code wasn’t working. I made changes in the utils.py file to increase the design matrix readability and fixed a sign error in the B[:, 3] and B[:, 4] diffusion tensor elements; we realized the sign needed to be negative to show that it represents a signal decay. I implemented the mapping of all the covariance parameters from the paper to the actual code, which created a need to talk to the original paper’s authors, as the conversion shown in the paper didn’t quite match its implementation. I also worked on matching the ground-truth signal values for the anisotropic and combined DTDs. The isotropic DTD signals that were being generated matched the QTI signals exactly, since in the isotropic case we have 6 non-zero elements and the rest are 0s. However, in the anisotropic case we had more non-zero covariance parameters (9 non-zero), and similarly for the combined DTD. We figured out that the non-zero elements were being multiplied by an incorrect value, and that fixing this required modifying the ccti conversion. So I read more about Voigt notation, as the QTI parameters were implemented using that notation. We then looked into the QTI paper again, contacted its author and code implementer, and realized that the code was written with the Voigt notation conversion as well as some other factors in mind. At the end of this, we figured out the correct conversion of the ccti parameters: some needed division by √2, while others needed division by 2. We were therefore able to successfully determine the correct factor by which each covariance parameter needed to be multiplied or divided, and now the signal values of all the DTDs match as expected. The other major ongoing task this week has been the implementation of the Fit class in CTI. This required me to implement some functions that may already have counterparts in DKI/QTI. This is an ongoing task and will require more work.
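
For reference, this is the √2 bookkeeping of the Voigt (Mandel) convention as I understand it, which is where factors like these come from: off-diagonal entries of a symmetric 3×3 tensor carry a √2 so that tensor inner products are preserved as vector dot products (the exact per-parameter ccti factors are a separate matter, worked out as described above).

```latex
\mathbf{d} = \begin{pmatrix}
D_{xx} & D_{yy} & D_{zz} &
\sqrt{2}\,D_{yz} & \sqrt{2}\,D_{xz} & \sqrt{2}\,D_{xy}
\end{pmatrix}^{\mathsf{T}},
\qquad
\mathbf{d}_1^{\mathsf{T}}\,\mathbf{d}_2 = \mathbf{D}_1 : \mathbf{D}_2
```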

Read more ...


VQVAE MONAI models & checkerboard artifacts: Week 8 & Week 9

We observed in our previous results that the Diffusion Model’s performance may depend on better, more effective latents from the VQVAE. After playing around with the convolutional & residual components of the existing architecture, which yielded unsatisfactory results, we decided to move to a model already proven on 3D MRIs. A model that works well on the MNIST dataset will not necessarily deliver similar results on 3D MRI datasets, owing to the differences in complexity of the data distributions. Changing the convolutions to 3D filters alone clearly did not do the job.

Read more ...


Modifying Test Signal Generation

One of the tasks I did this week was modifying the cti_design_matrix again, as asked by my mentor, to make the code more readable. The initial code followed the PEP8 standard but wasn’t very easy to read; now it is. I also realized that the main reason my signals weren’t matching the ground-truth values at all before was that the eigenvalues and eigenvectors of the diffusion tensor distribution were wrong. Previously, I obtained D_flat by doing np.squeeze(from_3x3_to_6x1(D)), which returned a tensor of shape (6,). But in this case, it returned the diffusion tensor elements in the order Dxx, Dyy, Dzz, and so on, which isn’t the input format expected by the from_lower_triangular function. So initially we were doing evals, evecs = decompose_tensor(from_lower_triangular(D_flat)), where the from_lower_triangular function returns a tensor of shape (3, 3). But then I realized that rather than calculating D_flat, we can simply do evals, evecs = decompose_tensor(D). Following this approach gave the correct values of evals and evecs, which brought the signals closer to the ground-truth signals, though they still don’t match completely. Another problem we noticed was that when passing C, the covariance tensor parameters, we needed to make sure we were passing the modified C parameters, that is, ccti. This again helped bring the signals toward the expected values. So, after talking things through with my mentor and analyzing the QTI paper, we came to a few conclusions about what could be done to improve the signal values.
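
A minimal sketch of the ordering pitfall described above, assuming the fix amounts to decomposing the full 3×3 tensor directly (the diagonal tensor is illustrative):

```python
import numpy as np
from dipy.reconst.dti import decompose_tensor, from_lower_triangular
from dipy.reconst.qti import from_3x3_to_6x1

D = np.diag([1.7e-3, 0.4e-3, 0.4e-3])  # illustrative diffusion tensor

# Pitfall: from_3x3_to_6x1 returns Voigt order (Dxx, Dyy, Dzz, ...),
# while from_lower_triangular expects (Dxx, Dxy, Dyy, Dxz, Dyz, Dzz).
d_voigt = np.squeeze(from_3x3_to_6x1(D))  # shape (6,), wrong order for DTI

# Decomposing the (3, 3) tensor directly sidesteps the mismatch.
evals, evecs = decompose_tensor(D)
```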

Read more ...


Diffusion Model results on pre-trained VQVAE latents of NFBS MRI Dataset: Week 6 & Week 7

My current code for VQVAE & DM is well tested on the MNIST dataset, as shown in the previous blog posts. I extended the current codebase to the MRI dataset by using 3D convolutions instead of 2D ones, which resulted in 600k parameters for the VQVAE at a downsampling factor f=3. I used a preprocess function to transform MRI volumes to the desired shape (128, 128, 128, 1) through DIPY’s reslice and scipy’s affine_transform functions, followed by MinMax normalization. I trained the VQVAE architecture with batch_size=10 and the Adam optimizer at lr=2e-4 for 100 epochs. I followed suit for downsampling factor f=2 as well and got the corresponding training curves.
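
A hedged sketch of what such a preprocess function can look like (the name, target zooms, and normalization details are my assumptions; the actual transform_img may differ):

```python
import numpy as np
from dipy.align.reslice import reslice


def preprocess(volume, affine, zooms, new_zooms=(2.0, 2.0, 2.0)):
    """Reslice to isotropic spacing, then MinMax-normalize to [0, 1]."""
    data, new_affine = reslice(volume, affine, zooms, new_zooms)
    data = (data - data.min()) / (data.max() - data.min() + 1e-8)
    # A final scipy.ndimage.affine_transform / crop step would bring the
    # volume to the (128, 128, 128, 1) shape mentioned above.
    return data[..., np.newaxis], new_affine
```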

Read more ...


Design Matrix Implementation and Coding with PEP8: Week 5

This week, my work focused on two main areas: improving the design matrix and implementing methods under the Fit class in CTI. For the design matrix improvement, I noticed that the design matrix I had previously created was not according to PEP8 standards. After some effort, I managed to modify it to comply with the appropriate format. This week, my time was mostly consumed by implementing methods under the Fit class in CTI. As CTI is an extension of DKI and shares similarities with the QTI model, I had to look into methods already implemented in DKI and QTI. My approach involved going through these two different modules, comparing the methods, and making notes on which ones would need to be implemented in CTI. This was challenging, as CTI’s design matrix is significantly different. Although this implementation is not completely done, I was able to learn a lot.

Read more ...


Creating signal_predict Method: Testing Signal Generation

This week, I worked together with my mentor to come up with a new way of arranging the elements of the design matrix. First, I rearranged all the covariance parameters so that they’d match the ones in QTI, making the order: the diffusion tensor, the covariance tensor, and then the kurtosis tensor. But then we decided that it would be better to put the kurtosis tensor first, because that way we wouldn’t have to re-implement all the kurtosis methods. So I swapped the order of the kurtosis and covariance tensors.

Read more ...


Carbonate Account Setup, Experiment, Debug and Repeat: Week 5

I finally got my hands on IU’s HPC systems - Carbonate & Big Red 200. I quickly set up a virtual remote connection to Carbonate’s Slate in VS Code with Jong’s help. Later, I started looking into interactive jobs on Carbonate to have GPUs on the go for coding and testing, and spent a ton of time reading up on Carbonate’s interactive SLURM job documentation. Using X11 forwarding, I was able to spin up an interactive job inside the login node from the command prompt. It popped up a Firefox browser window from the login node, which ended up slow and not very user friendly; the same goes for Big Red 200. Eventually, my efforts were in vain and I resorted to installing a Jupyter notebook server in my home directory. Although I can’t request a GPU with this notebook, it allows me to debug syntax errors, visualize outputs, plot loss values, etc.

Read more ...


Re-Engineering Simulation Codes with the QTI Model and Design Matrix

I had to change the cti_test.py file, as the signals generated were not exactly correct. I was advised to follow the multiple-Gaussian signal generation method. While doing this, I had to look closely at several already-implemented methods and dig in to understand how those functions achieve the desired output. The multiple-Gaussian method is preferred because CTI signal generation closely resembles multiple Gaussian signals, and using them gives us an a priori idea of what to expect from the outcome if we fit our model to this signal. I also implemented the design matrix for the CTI tensor and saved it in the utils.py file. The design matrix is a crucial component of the CTI model, as it represents the relationships between the different variables in our model. By accurately modeling these relationships, we can generate more realistic simulations and gain a deeper understanding of the CTI tensor. My work is available here: https://github.com/dipy/dipy/pull/2816
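
For illustration, a small sketch of multiple-Gaussian signal generation with DIPY’s simulation utilities (all b-values, eigenvalues, angles, and fractions below are placeholders):

```python
import numpy as np
from dipy.core.gradients import gradient_table
from dipy.sims.voxel import multi_tensor

rng = np.random.default_rng(0)
bvals = np.concatenate([[0.0], np.full(32, 1000.0)])
bvecs = rng.normal(size=(33, 3))
bvecs[0] = 0.0
bvecs[1:] /= np.linalg.norm(bvecs[1:], axis=1, keepdims=True)
gtab = gradient_table(bvals, bvecs)

# Two equally weighted Gaussian compartments crossing at 90 degrees.
mevals = np.array([[1.7e-3, 0.3e-3, 0.3e-3],
                   [1.7e-3, 0.3e-3, 0.3e-3]])
signal, sticks = multi_tensor(gtab, mevals, S0=100,
                              angles=[(0, 0), (90, 0)],
                              fractions=[50, 50], snr=None)
```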

Read more ...


Diffusion research continues: Week 4

As discussed last week, I completed my research on StableDiffusion (SD). Currently, we’re looking into unconditional image reconstruction/denoising/generation using SD. I finished putting together a Keras implementation of unconditional SD. Since I couldn’t find an official implementation of unconditional SD code, I collated the DDPM diffusion model and VQ-VAE codebases separately.

Read more ...


CTI Simulation and QTI tutorial: Week 3

This week I worked on finishing the simulations with the appropriate documentation. I also worked on creating a general tutorial for CTI/QTI, as one doesn’t already exist for QTI. The idea behind this general tutorial is that there isn’t any tutorial for advanced diffusion encoding: the closest documentation QTI has is here, although there are several YouTube videos. So, in this tutorial we start by simulating QTI, and then we make things a little more complex by adding information on CTI, as QTI can only handle a single gradient table whereas CTI can handle multiple gradient tables. This week I also initialized the cti_tests.py file by adding relevant simulations to it.
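
For context, a hedged sketch of declaring tensor-valued encodings in DIPY (values are placeholders; gradient_table accepts a btens argument for linear/planar/spherical b-tensors):

```python
import numpy as np
from dipy.core.gradients import gradient_table

bvals = np.array([0.0, 1000.0, 1000.0, 2000.0])
bvecs = np.array([[0, 0, 0], [1, 0, 0],
                  [0, 1, 0], [0, 0, 1]], dtype=float)

gtab_lte = gradient_table(bvals, bvecs, btens="LTE")  # linear encoding
gtab_ste = gradient_table(bvals, bvecs, btens="STE")  # spherical encoding
```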

Read more ...


VQ-VAE results and study on Diffusion models: Week 3

I continued my experiments with VQ-VAE on MNIST data to see the efficacy of the Prior training in the generated outputs. The output of the encoder for every input image delivers a categorical index of a latent vector for every pixel in the output. As discussed in the previous blog post, the Prior has been trained separately using PixelCNN (without any conditioning) in the latent space.

Read more ...


Signal Creation & Paper Research: Week 2 Discoveries

I worked through this research paper and found some facts relevant to the tasks at hand, such as the different sources of kurtosis. Another important fact I found is that DDE comprises 2 diffusion encoding modules characterized by different q-vectors (q1 and q2) and diffusion times. This matters because the CTI approach is based on DDE’s cumulant expansion, and the signal is expressed in terms of 5 unique second- and fourth-order tensors. I also found out how the synthetic signals can be created using 2 different scenarios: a mix of Gaussian components, and a mix of Gaussian and/or restricted compartments. The majority of my time this week was spent creating synthetic signals, and therefore simulations.

Read more ...


Deep Dive into VQ-VAE: Week 2

This week I took a deep dive into the VQ-VAE code. Here’s a little bit about VQ-VAE.

Read more ...


Community bonding and Project kickstart: Week 1

The Community Bonding period ended last week, and my first blog post covers the work carried out during that week. My meeting with my GSoC mentors at the start of the week helped me chalk out an agenda. As the first step, I familiarized myself with TensorFlow operations, functions, and distribution strategies. My previous experience with PyTorch, along with website tutorials on basic Deep Learning models, helped me learn TensorFlow quickly. Next, I read the VQ-VAE paper and studied the open-source TensorFlow implementation. VQ-VAE addresses the ‘posterior collapse’ seen in traditional VAEs and overcomes it by discretizing the latent space, which in turn also improves generative capability by producing less blurry images than before. Getting familiar with VQ-VAE early on helps in understanding the latents used in Diffusion models in later steps. I also explored a potential dataset - IXI (T1 images) - and performed some exploratory data analysis, such as age & sex distribution. The images contain the entire skull, so they may require brain extraction & registration; it may be more useful to use existing preprocessed datasets and align them to a template. Next week, I’ll conduct a further literature survey on Diffusion models.
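
The discretization mentioned above is the standard VQ-VAE nearest-neighbour codebook lookup (z_e is the encoder output and the e_j are the learned codebook embeddings):

```latex
z_q(x) = e_k, \qquad k = \arg\min_j \,\lVert z_e(x) - e_j \rVert_2
```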

Read more ...


Community Bonding and Week 1 Insights

Hey there! I’m Shilpi, a Computer Science and Engineering undergrad at Dayananda Sagar College of Engineering, Bangalore. I’m on track to grab my degree in 2024. My relationship with Python started just before I started college - I got my hands dirty with an awesome Python Specialization course on Coursera. When it comes to what makes me tick, it’s all things tech; new technology always excites me. Ubuntu, with its fancy terminal and all, used to intimidate me at first, but now I get a thrill out of using it to do even the simplest things. Up until 2nd year I did competitive programming and a bit of ML, but from 3rd year I’ve been into ML very seriously, doing several courses on ML as well as solving ML problems on Kaggle. ML is a lot of fun, and I’ve done a few projects on it as well. Coding? Absolutely love it. It’s like, this is what I was meant to do, y’know? I got introduced to git and GitHub in my first year - I was super curious about how the whole version control thing worked. Then I stumbled upon the world of open source in my second year and made my first contribution to Tardis (tardis-sn/tardis#1825). Initially, I intended to do GSoC during my second year but ended up stepping back for reasons. This time, though, I was fired up to send in a proposal to at least one organization in GSoC. And, well, here we are!

Read more ...


Journey of GSoC application & acceptance: Week 0

While applying for the GSOC 2023 DIPY sub-project titled “Creating Synthetic MRI”, I knew this would be the right one for me for two reasons. Keep reading to know more!

Read more ...


Google Summer of Code Progress August 19

So we are at the end of this awesome summer, and this post is about the progress in my final weeks of GSoC 2016! The major addition in this period is the development stats visualization page.

Read more ...


Google Summer of Code Progress August 7

Yay! We have a dynamically generated gallery and tutorials page now!

Read more ...


Google Summer of Code Progress July 22

It has been about 3 weeks after the midterm evaluations. The dipy website is gradually heading towards completion!

Read more ...


Google Summer of Code Progress June 24

This is the midterm period and the dipy website has a proper frontend now! And more improvements are coming.

Read more ...


Google Summer of Code Progress June 10

It has been about 20 days since the coding period began. I have made some decent progress with the backend of the Dipy website. The target set in the timeline of my proposal was setting up an authentication system with GitHub login in Django, along with custom admin panel views for content management.
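
As a hedged sketch (assuming python-social-auth, a common choice for Django at the time, and not necessarily the package actually used), GitHub login boils down to a few settings:

```python
# settings.py fragment (illustrative; real keys belong in environment vars).
INSTALLED_APPS = [
    # ... Django defaults ...
    "social.apps.django_app.default",
]

AUTHENTICATION_BACKENDS = (
    "social.backends.github.GithubOAuth2",
    "django.contrib.auth.backends.ModelBackend",
)

SOCIAL_AUTH_GITHUB_KEY = "<github-client-id>"
SOCIAL_AUTH_GITHUB_SECRET = "<github-client-secret>"
```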

Read more ...


Google Summer of Code with Dipy

I know I am a bit late with this blogpost, but as you probably guessed from the title I made it into Google Summer of Code 2016!! Throughout this summer I will be working with DIPY under the Python Software Foundation.

Read more ...


Final Project Report

Hi all!

Read more ...


Start wrapping up - Test singularities of kurtosis statistics

As we are reaching the end of the GSoC coding period, I am starting to wrap up the code that I developed this summer.

Read more ...


Attempt to further improve the diffusion standard statistics

The denoising strategy that I used to improve the diffusion standard statistics (see my last post), required the estimation of the noise standard deviation (sigma). As a first approach, I used a simple sigma estimation procedure that was specifically developed for T1-weighted images. Thus, this might not be the most adequate approach for diffusion-weighted images.
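
For illustration, a sketch of a typical DIPY sigma-estimation and denoising pass (not necessarily the exact procedure used in the post; the random array stands in for real diffusion data):

```python
import numpy as np
from dipy.denoise.nlmeans import nlmeans
from dipy.denoise.noise_estimate import estimate_sigma

data = np.random.rand(32, 32, 32, 5)  # stand-in for a 4D dMRI volume
sigma = estimate_sigma(data, N=4)     # N: number of receiver coils
denoised = nlmeans(data, sigma=sigma, rician=True)
```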

Read more ...


Further improvements on the diffusion standard statistics

As I mentioned in my last post, I used the implemented modules to process data acquired with similar parameters to one of the largest worldwide projects, the Human Connectome project. Considering that I was fitting the diffusion kurtosis model with practically no pre-processing steps, which are normally required on diffusion kurtosis imaging, kurtosis reconstructions were looking very good (see Figure 2 of my last post).

Read more ...


Progress Report on Diffusion Kurtosis Imaging (DKI) Implementation

We are almost getting to the end of the GSoC coding period 😭.

Read more ...


Progress Report on Diffusion Kurtosis Imaging (DKI) Implementation

Progress is going as planned in my mid-term summary :).

Read more ...


Perpendicular directions samples relative to a given vector

As I mentioned in the mid-term summary, one of my next steps is to implement some numerical methods to compute the standard kurtosis measures to evaluate their analytical solution.
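
A minimal sketch of one way to sample directions perpendicular to a given vector (my own construction, not necessarily the implementation that ended up in DIPY):

```python
import numpy as np


def perpendicular_directions(v, num=20):
    """Return `num` unit vectors evenly spread in the plane normal to v."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    # Seed with any axis not parallel to v, then build an orthonormal basis.
    seed = np.array([1.0, 0, 0]) if abs(v[0]) < 0.9 else np.array([0, 1.0, 0])
    e1 = np.cross(v, seed)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(v, e1)
    theta = np.linspace(0, 2 * np.pi, num, endpoint=False)
    return np.outer(np.cos(theta), e1) + np.outer(np.sin(theta), e2)
```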

Read more ...


Artifacts in Dipy’s sample data Sherbrooke’s 3 shells

Hi all,

Read more ...


Mid-Term Summary

We are now at the middle of the GSoC 2015 coding period, so it is time to summarize the progress made so far and update the plan for the second half of the program.

Read more ...


Progress Report (DKI simulations merged and DKI real data fitted)

I have made great progress in the last 2 weeks of coding!!! In particular, two major achievements were accomplished:

Read more ...


First report (1st week of coding, challenges and ISMRM conference)

The coding period started in a challenging way.

Read more ...


Time to start mapping brain connections and looking at brain properties in vivo

Hi all,

Read more ...


First post after acceptance! =)

Hi all,

Read more ...