Halfway through my research experience at Harvard

Before getting into the technical aspects of what I have been working on these months, there are some general skills I have gained during this first period at the lab. I have improved my critical reading of publications, learning to determine whether a paper is aligned with my interests and how to extract insights from it more effectively. I have made it a habit to keep track of what I learn every day by documenting it myself. I have gotten a taste of how innovation is driven in a highly competitive lab full of outstanding researchers, and I have improved my creativity by coming up with ideas for pushing the current state of the art. Finally, being in the lab I have had the experience of contributing to open-source projects and messaging directly with the authors of papers when I ran into concerns, the PyTorch and Facebook AI Research repositories among them.
Furthermore, Harvard has given me the opportunity to learn a lot outside the lab. I have attended courses to reinforce my research (CS281 and CS128). I also joined the Robotics Undergraduate Team, participating in RoboCup to apply the Reinforcement Learning techniques I have been studying to real-world applications and to refresh my knowledge of electronics and mechanics.

My experience inside the lab has been even more rewarding. During this time, I have gained a deep understanding of the state-of-the-art deep network architectures used in computer vision (VGG, ResNets, and DenseNets). I have learned about ensemble learning applied to these deep networks and how this collaboration between models can outperform a single one. In addition, I have learned about knowledge transfer, and how what one network has learned can be cheaply transmitted to a new network.
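To make these two ideas concrete, the sketch below shows prediction averaging for an ensemble and a standard knowledge-distillation loss in the style of Hinton et al. This is my own minimal PyTorch sketch, not the lab's code; the function names, temperature, and weighting are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the lab's actual setup):
# (1) combine an ensemble by averaging softmax outputs,
# (2) distill a trained "teacher" network into a smaller "student".
import torch
import torch.nn.functional as F


def ensemble_predict(models, x):
    """Average the softmax outputs of several independently trained models."""
    probs = [F.softmax(m(x), dim=1) for m in models]
    return torch.stack(probs).mean(dim=0)


def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Knowledge-distillation objective: soften both distributions with a
    temperature T, match the student to the teacher (KL term), and keep the
    usual cross-entropy on the hard labels. T and alpha are placeholder values."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard
```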
Combining these fundamental topics, I have followed the research line of my colleague Wasay, focusing my reading on how to accelerate the training of ensembles of deep neural networks and obtain their benefits at no extra cost. This also implies looking for techniques to accelerate inference, since ensemble models tend to be larger than a single network.
As for what I have actually worked on in the lab, it is particularly counter-intuitive given the previous statement. I have been studying how ensembles perform under the same training budget as a single network. So far, we are seeing that, keeping the total number of parameters constant, an ensemble of small networks can train in the same time as a single deeper network and even outperform it. In this way we are attempting to determine whether going deeper is the only option in deep learning, or whether there are scenarios in which group knowledge (obtained by ensembling networks) may be a better choice.
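To illustrate the "same parameter budget" comparison, here is a rough sketch of how one might set it up. The architectures, widths, and depths below are placeholders I chose for illustration, not the networks used in the project.

```python
# Sketch (illustrative only): one deep network vs. an ensemble of shallower
# networks with a roughly comparable total parameter count, combined by
# averaging their logits.
import torch
import torch.nn as nn


def mlp(width, depth, in_dim=784, out_dim=10):
    """Simple fully connected network; stands in for a real vision architecture."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)


def n_params(model):
    return sum(p.numel() for p in model.parameters())


# Widths/depths are placeholders; in practice one would tune them so the
# single network and the ensemble use the same parameter budget.
deep = mlp(width=512, depth=8)
ensemble = [mlp(width=384, depth=2) for _ in range(4)]
print("deep:", n_params(deep), "ensemble:", sum(n_params(m) for m in ensemble))

# Ensemble prediction: average the member logits for a batch of inputs.
x = torch.randn(32, 784)
ensemble_logits = torch.stack([m(x) for m in ensemble]).mean(dim=0)
```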
Finally, I have also been working on a theoretical understanding of whether it is more convenient for a network to be shallow, deep, or an ensemble of networks while keeping its capacity constant. This has driven me to learn about the loss-function landscape and the compression of neural models.