Deep Learning Blog



Artificial Neural Networks



Artificial Neural Networks are a Deep Learning tool for solving tasks. They were born decades ago, but their popularity has grown exponentially over the last 10 years, especially since Google opened TensorFlow for public use. However, there is something missing in this equation. A generation of data scientists has been born, using these tools to solve their specific problems. To do so, the majority of them rely on those frameworks and 'the magic behind them'. Andrej Karpathy made a really good point about this in his Medium article.

In this work, there is a detailed explanation of the magic that happens under the hood of these frameworks. We should all know what is going on and not depend on how other people do things, because only that way will we be able to solve the problems that, for sure, we will encounter.

The work is divided into the 4 chapters listed below, where we will break our next best friend into its smallest components (a minimal from-scratch sketch follows the chapter list):
Find it here

(0) - Give me code!

(1) - Neural Networks - First contact

(2) - Neural Networks - The Big Picture

(3) - Neural Networks - The Graph Approach

(4) - Neural Networks - Understanding BackProp Dimensionality
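
In the spirit of chapter (0), here is a minimal from-scratch sketch of the machinery the chapters unpack: a 2-layer network trained with a hand-written forward and backward pass in NumPy. It is illustrative only, not the code from the chapters; the data, layer sizes, and learning rate are arbitrary choices, and the shape comments preview the dimensionality bookkeeping of chapter (4).

```python
import numpy as np

# Toy data: 64 samples with 3 features each, 1 regression target each.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
y = rng.normal(size=(64, 1))

# Parameters of a 2-layer network: 3 -> 8 -> 1.
W1, b1 = 0.1 * rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = 0.1 * rng.normal(size=(8, 1)), np.zeros(1)

lr = 0.1
for step in range(200):
    # Forward pass.
    h = np.maximum(0.0, X @ W1 + b1)      # ReLU hidden layer, shape (64, 8)
    y_hat = h @ W2 + b2                   # linear output, shape (64, 1)
    loss = np.mean((y_hat - y) ** 2)      # mean squared error

    # Backward pass: the chain rule, tracking the shape at every step.
    d_yhat = 2.0 * (y_hat - y) / len(X)   # (64, 1)
    dW2 = h.T @ d_yhat                    # (8, 1)
    db2 = d_yhat.sum(axis=0)              # (1,)
    dh = d_yhat @ W2.T                    # (64, 8)
    dh[h <= 0] = 0.0                      # ReLU gradient: zero where the unit was off
    dW1 = X.T @ dh                        # (3, 8)
    db1 = dh.sum(axis=0)                  # (8,)

    # Vanilla gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    if step % 50 == 0:
        print(f"step {step:3d}  loss {loss:.4f}")
```

Frameworks like TensorFlow or PyTorch automate exactly these backward-pass lines; the chapters show how that automation works as a computational graph.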




Convolutional Neural Networks




Tutorial overview with detailed visualizations of the evolution of convolutional neural networks up to the state of the art (a minimal residual-block sketch follows the index). Find it here

Index

1: LeNet - LeCun - 1998 Paper (TBI)

2: AlexNet - Krizhevsky - 2012 Paper

3: GoogLeNet (Inception) - Szegedy - 2014 Paper (TBI)

4: VGG - Simonyan / Zisserman - 2014 Paper (TBI)

5: ResNets - He - 2015 Paper
  - Tutorial on Residual Deep Convolutional Networks (Demystifying the ResNet Paper)
  - Tutorial on modifying the ResNet for CIFAR-10
    - PyTorch Implementation Explanation
    - Code for ResNets on CIFAR10
6: DenseNets - Huang - 2016 Paper
  - Tutorial on Densely Connected Convolutional Networks (Demystifying the DenseNet Paper)
  - Tutorial on modifying the DenseNet for CIFAR-10
    - Code for DenseNets on CIFAR10

7: MobileNets - Howard - 2017 Paper (TBI)

8: FractalNets - Larsson - 2016 Paper (TBI)

9: SE-Nets - Hu - 2017 Paper
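
The residual-block sketch referenced in the overview: a minimal, illustrative PyTorch version of the core idea from the He et al. paper, where the block learns a residual F(x) and the input is added back through a shortcut, so y = F(x) + x. This is not the repository's CIFAR-10 implementation; the class name and channel sizes are only examples.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """A basic residual block: two 3x3 convolutions plus a shortcut,
    so the output is F(x) + x rather than F(x) alone."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # A 1x1 convolution on the shortcut when the shape changes.
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 1, stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + self.shortcut(x)   # the residual connection
        return torch.relu(out)

# CIFAR-10-sized sanity check: a batch of 4 RGB 32x32 images.
x = torch.randn(4, 3, 32, 32)
print(BasicBlock(3, 16)(x).shape)  # torch.Size([4, 16, 32, 32])
```

The shortcut is what lets very deep stacks of these blocks train well: gradients can flow through the identity path even when the convolutional path is poorly conditioned.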




Ensembles of Neural Networks (Ongoing work)


Ensemble learning is a machine learning technique that builds a strong model out of multiple models, called weak learners, in order to obtain a prediction accuracy higher than any of the weak learners could achieve alone.
This work focuses on ensembles of neural networks, carefully explaining and nicely visualizing the main research contributions that have been made in this field (a minimal output-aggregation sketch follows the index below). Find the landing page here, or go to direct resources:


(1) - Justifying Neural Network Ensembling

(2) - Ensemble strategies I - Distribution of data (TBI)

(3) - Ensemble strategies II - Outputs aggregation

   (3.1) - Aggregate outputs from individually trained models
   (3.2) - Ensemble awareness during training

(4) - Ensemble strategies III - Accelerate ensemble training (TBI)

   (4.1) - Snapshots - Train 1, get M for free
   (4.2) - Function preserving transformations (TBI)
    (4.2.1) - Net2Net (TBI)
    (4.2.2) - Network Morphism (TBI)
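
The output-aggregation sketch referenced above, corresponding to strategy (3.1): average the softmax probabilities of individually trained models and take the argmax. The tiny linear classifiers below are hypothetical stand-ins for the CNNs the tutorials actually train; only the aggregation logic is the point.

```python
import torch
import torch.nn as nn

# Stand-in weak learners: 5 independently initialized classifiers mapping
# 10 features to 3 classes. In the tutorials these would be full CNNs.
models = [nn.Linear(10, 3) for _ in range(5)]

def ensemble_predict(models, x):
    """Output aggregation: average the softmax probabilities of the
    individually trained models, then take the argmax per sample."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=1) for m in models])
    return probs.mean(dim=0).argmax(dim=1)   # (n_models, N, C) -> (N,)

x = torch.randn(4, 10)                 # a batch of 4 samples
print(ensemble_predict(models, x))     # one predicted class per sample
```

Averaging probabilities rather than hard votes keeps the aggregation differentiable-friendly and lets confident models outweigh uncertain ones.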




Computer Vision



Tutorial overview with detailed visualizations of computer vision topics, from convolutional neural networks to scene understanding.
Find it here

Index

  - Convolutional Neural Networks
  - Neural Style Transfer
   - Application of NST to make Retiro (the Central Park of Madrid, Spain) look like a Fortnite video game screenshot!
  - Simultaneous Localization and Mapping - SLAM
   - Visual implementation of Graph SLAM for an agent in a 2D environment
  - Scene Understanding
   - Overview of Image Segmentation