Posts by Iñigo Tellaetxe

Week 12 into GSoC 2024: Last weeks of the coding phase and admin stuff

This week I spent more time fixing and enhancing the adversarial AutoEncoder (AAE) implementation, as well as writing the final GSoC 2024 post. I also opened a draft PR to link in the final post, giving my work a tangible place to be published.

Read more ...


Week 11 into GSoC 2024: The Adversarial AutoEncoder

This week was all about learning about adversarial networks, attribute-based latent space regularization in AutoEncoders, and fighting with Keras and TensorFlow to implement the adversarial framework. It was a bit (or two) challenging, but I managed to do it, thanks to a very nice and clean implementation I found, based on the original adversarial AutoEncoders paper.
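
For context, the core of the adversarial framework is a regularization step in which a discriminator learns to tell encoder outputs apart from samples of a chosen prior, while the encoder learns to fool it. Below is a minimal sketch of that step in TensorFlow/Keras; the `encoder` and `discriminator` models, the latent size, and the optimizer settings are illustrative assumptions, not the exact implementation I used.

```python
import tensorflow as tf
from tensorflow import keras

# Assumed handles: `encoder` maps streamlines to latent codes, `discriminator`
# maps a latent code to a single real/fake logit. Both are keras.Model instances.
LATENT_DIM = 32
bce = keras.losses.BinaryCrossentropy(from_logits=True)
d_optimizer = keras.optimizers.Adam(1e-4)
g_optimizer = keras.optimizers.Adam(1e-4)


@tf.function
def adversarial_step(encoder, discriminator, x):
    # 1) Discriminator update: samples from the prior are "real",
    #    encoder outputs are "fake".
    prior = tf.random.normal((tf.shape(x)[0], LATENT_DIM))
    with tf.GradientTape() as tape:
        d_real = discriminator(prior, training=True)
        d_fake = discriminator(encoder(x, training=True), training=True)
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
    grads = tape.gradient(d_loss, discriminator.trainable_variables)
    d_optimizer.apply_gradients(zip(grads, discriminator.trainable_variables))

    # 2) Encoder (generator) update: try to fool the discriminator so that the
    #    latent codes become indistinguishable from the prior.
    with tf.GradientTape() as tape:
        d_fake = discriminator(encoder(x, training=True), training=True)
        g_loss = bce(tf.ones_like(d_fake), d_fake)
    grads = tape.gradient(g_loss, encoder.trainable_variables)
    g_optimizer.apply_gradients(zip(grads, encoder.trainable_variables))
    return d_loss, g_loss
```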

Read more ...


Week 10 into GSoC 2024: Validating the conditional VAE results

During this week I focused on validating the results of the conditional VAE (cVAE) that I implemented and experimented with last week.

Read more ...


Week 9 into GSoC 2024: The Conditional VAE implementation

This week was a bit shorter than usual because Thursday was a holiday in the Basque Country, and today we had an outdoor activity with my lab mates (we went kayaking in the Urdaibai Biosphere Reserve). Nevertheless, it was full of progress and interesting scientific matters.

Read more ...


Week 8 into GSoC 2024: Further advances with the VAE model

This week I continued training the VAE model with the FiberCup dataset, this time for 120 epochs, and the results are promising. The model is able to reconstruct the input data with a decent level of detail.

Read more ...


Week 7 into GSoC 2024: Starting to see the light at the end of the VAE

Finally, I figured out how to solve the NaN value problem in the VAE training. As I suspected, the values that the ReparametrizationTrickSampling layer was receiving were too big for the exponential operations. I use the exponential operation because I treat the encoder output as the log variance of the latent space distribution, and sampling requires the standard deviation. We work with the log variance instead of the standard deviation to avoid computing logarithms (e.g., in the KL divergence term) and to keep the variance positive without constraining the encoder output.
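
To make the fix concrete, here is a minimal sketch of a sampling layer in the spirit of the one described above; the clipping bounds are an assumption chosen to keep the exponential in a safe range, not necessarily the exact values of the final fix.

```python
import tensorflow as tf
from tensorflow import keras


class ReparametrizationTrickSampling(keras.layers.Layer):
    """Sample z = mu + sigma * epsilon, where sigma = exp(0.5 * log_var)."""

    def call(self, inputs):
        z_mean, z_log_var = inputs
        # Clipping keeps exp() from overflowing to inf, which was the source
        # of the NaNs; the bounds here are illustrative assumptions.
        z_log_var = tf.clip_by_value(z_log_var, -10.0, 10.0)
        epsilon = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon
```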

Read more ...


Week 6 into GSoC 2024: Stuck with the Variational AutoEncoder, problems with Keras

This week was all about the Variational AutoEncoder. My mentors advised me to drop the TensorFlow implementation of the regression VAE I found last week and instead integrate the variational and conditional characteristics directly into my AE implementation, following a more modular approach. This was a good decision, as adapting third-party code to one's needs is often a bit of a mess (it had already started being a mess, so yeah). Also, once the variational part is done, implementing the conditional part should not be that hard.
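
As an illustration of the modular idea, here is a hedged sketch of a drop-in Keras layer that turns a deterministic bottleneck into a variational one; the layer name and sizes are hypothetical, not my actual code. Because the KL term is registered via add_loss, the existing AE training loop can stay untouched.

```python
import tensorflow as tf
from tensorflow import keras


class VariationalBottleneck(keras.layers.Layer):
    """Hypothetical drop-in layer that makes a deterministic AE bottleneck variational."""

    def __init__(self, latent_dim, **kwargs):
        super().__init__(**kwargs)
        self.mean_head = keras.layers.Dense(latent_dim)
        self.log_var_head = keras.layers.Dense(latent_dim)

    def call(self, h):
        z_mean = self.mean_head(h)
        z_log_var = self.log_var_head(h)
        # KL divergence to a standard normal prior; add_loss lets the existing
        # training loop pick it up without restructuring the model.
        kl = -0.5 * tf.reduce_mean(
            tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var),
                          axis=-1))
        self.add_loss(kl)
        epsilon = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon
```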

Read more ...


Week 5 into GSoC 2024: Vacation, starting with the conditional AutoEncoder

Hi everyone! This week I have been on vacation, so I have not been able to work on the project as much as in previous weeks. However, I have been thinking about the next steps and decided to start with the conditional AutoEncoder. I have been reading some papers and found some interesting ideas that would be nice to implement.

Read more ...


Week 4 into GSoC 2024: Weight transfer experiments, hardships, and results!

Well, this week was really intense. I spent most of the time trying to transfer the weights from the PyTorch model pre-trained on the TractoInferno dataset to the Keras model. I must say that thanks to the small size of the AutoEncoder, it was feasible to do it layer by layer without going crazy.
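
For reference, the main gotcha in this kind of transfer is that PyTorch and Keras store weights in transposed layouts. Below is a minimal sketch of the per-layer copies, with hypothetical helper names rather than my exact code; it assumes the Keras layers are already built.

```python
def transfer_dense(torch_linear, keras_dense):
    # PyTorch nn.Linear stores its weight as (out_features, in_features);
    # Keras Dense expects a kernel of shape (in_features, out_features).
    weight = torch_linear.weight.detach().numpy().T
    bias = torch_linear.bias.detach().numpy()
    keras_dense.set_weights([weight, bias])


def transfer_conv1d(torch_conv, keras_conv):
    # PyTorch nn.Conv1d weight: (out_channels, in_channels, kernel_size);
    # Keras Conv1D kernel: (kernel_size, in_channels, out_channels).
    weight = torch_conv.weight.detach().numpy().transpose(2, 1, 0)
    bias = torch_conv.bias.detach().numpy()
    keras_conv.set_weights([weight, bias])
```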

Read more ...


Third Week into GSoC 2024: Replicating training parameters, approaching replication

This week was slightly less productive because I was really busy with my PhD tasks, but I managed to make progress nevertheless. After implementing custom weight initializers (using He initialization) for the Dense and Conv1D layers in the AutoEncoder (AE), I launched some experiments to try to replicate the training process of the original model. This yielded better results than last week, this time setting the weight decay, the learning rate, and the latent space dimensionality as reported in the FINTA paper. Now the AE has no problem learning that the bundles have depth, and the number of broken streamlines decreased a lot compared to the previous results. I also tried to monitor the training experiments using TensorBoard, but I did not succeed because it was a last-minute idea and I did not have time to implement it properly.
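
For illustration, He initialization can be requested per layer in Keras like this; the layer sizes below are placeholders, not the exact FINTA architecture.

```python
from tensorflow import keras

# He (Kaiming) initialization for the AE's Dense and Conv1D layers.
he_init = keras.initializers.HeNormal()

dense = keras.layers.Dense(128, activation="relu", kernel_initializer=he_init)
conv = keras.layers.Conv1D(filters=32, kernel_size=3, strides=2, padding="same",
                           activation="relu", kernel_initializer=he_init)
```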

Read more ...


Second Week into GSoC 2024: Refactoring the AutoEncoder, preliminary results

This week I refactored the AutoEncoder code to match the design patterns and organization of other Deep Learning models in the DIPY repo, and to make the training loop more efficient and easier to use. I moved my code to a separate repo to keep the DIPY repo clean and to experiment freely; once the final product is working, I will merge it into DIPY. I also packaged the whole repo so I can use it as a library. Training experiments were run for a maximum of 150 epochs, with variable results. They are not amazing, but at least we get some reconstruction of the input tracts from FiberCup, which seems to be on the right track. I also implemented training logs that record the parameters I used for training, so I can reproduce the results at any time. This still needs work though, because not all parameters are stored yet. Need to polish! The left image shows the input tracts, and the middle and right images show two reconstructions from two different training experiments.
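
As a sketch of the parameter-logging idea (the helper below is hypothetical, not my exact implementation), dumping the run configuration to a timestamped JSON file is enough to make a run reproducible.

```python
import json
import time
from pathlib import Path


def log_training_params(out_dir, **params):
    """Dump the hyperparameters of a run to a timestamped JSON file."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    log_file = out_dir / f"train_{time.strftime('%Y%m%d-%H%M%S')}.json"
    log_file.write_text(json.dumps(params, indent=2, default=str))
    return log_file


# Record everything needed to reproduce a run; values here are examples.
log_training_params("logs", dataset="FiberCup", epochs=150, batch_size=32,
                    learning_rate=1e-3, latent_dim=32)
```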

Read more ...


First Week into GSoC 2024: Building the AutoEncoder, writing the training loop

I finished getting familiar with the TensorFlow + Keras basics and wrote the training loop, plus a couple of scripts for instantiating and training the AutoEncoder. Data loading was also addressed: I can load the FiberCup dataset in .trk format using NiBabel, transform it into NumPy arrays, and feed it into the network.
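
A minimal sketch of that loading path follows, assuming the streamlines are resampled to a fixed number of points so they stack into a single array; the file name, point count, and helper name are illustrative.

```python
import numpy as np
import nibabel as nib
from dipy.tracking.streamline import set_number_of_points


def load_trk_as_array(trk_path, n_points=256):
    """Load a .trk tractogram as an array of shape (n_streamlines, n_points, 3)."""
    tractogram = nib.streamlines.load(trk_path)
    # Resample every streamline to the same number of points so that they
    # stack into a single NumPy array the network can consume.
    resampled = set_number_of_points(list(tractogram.streamlines), n_points)
    return np.asarray(resampled, dtype=np.float32)


data = load_trk_as_array("fibercup_tracts.trk")  # hypothetical file name
```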

Read more ...


Community Bonding Period Summary and first impressions

Hi everyone! I am Iñigo Tellaetxe Elorriaga, BSc in Biomedical Engineering and MSc in Biomedical Technologies from Mondragon Unibertsitatea, in the Basque Country. I am a first-year PhD student at the Computational Neuroimaging Laboratory of the Biobizkaia Health Research Institute, also in the Basque Country. In the lab, our main paradigm is brain connectivity, so I am familiar with diffusion MRI and tractography. My main lines of research are brain aging, age modelling, and neurorehabilitation, all in the presence of neurodegenerative diseases and acute brain injuries. As for my programming skills, I am mainly a Python developer, and I am one of the main contributors to the ageml library, which we are developing at our lab as part of my PhD thesis. I also worked in industry as a research engineer in medical computer vision at Cyber Surgery, developing new methods to generate synthetic CT images from MRI using generative diffusion models, with the goal of reducing ionizing radiation for spinal surgery patients. I have been using DIPY for a while now for my research and other projects, so I am obviously really excited to contribute to the project this summer.

Read more ...