Tag Archives: research notes

Paper Notes: The Shattered Gradients Problem ...

paper: https://arxiv.org/abs/1702.08591

The full title of the paper is "The Shattered Gradients Problem: If resnets are the answer, then what is the question?". It is really interesting work, with many findings about the gradient dynamics of neural networks. It also examines Batch Normalization (BN) and Residual Networks (ResNets) in light of this problem.

The problem, dubbed "shattered gradients", is that the gradient feedback for nearby data points resembles random noise. White-noise gradients (random values around 0 with some unknown variance) are not useful for training and stall the network. What we expect from a working model is Brownian noise (each value is a small change on the previous one). Deep neural networks are more prone to white-noise gradients; however, recent advances like BN and ResNets are shown to be more resilient to random gradients, even in deep networks.
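To make the white-noise vs. Brownian distinction concrete, here is a small NumPy sketch (my own illustration, not from the paper) comparing how correlated neighbouring values are in each kind of signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# White noise: every value is an independent draw around 0.
white = rng.normal(size=n)

# Brownian noise: each value is the previous one plus a small random step
# (i.e. the cumulative sum of white noise).
brown = np.cumsum(rng.normal(size=n))

def neighbour_corr(x):
    """Correlation between the signal and itself shifted by one step."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print("white noise:   ", neighbour_corr(white))  # close to 0
print("brownian noise:", neighbour_corr(brown))  # close to 1
```

Gradients that look like the first signal give no consistent direction for nearby inputs, while gradients that look like the second one do.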

White-noise gradients undermine the effectiveness of networks because they violate the assumption behind gradient-based learning methods: data points that are close in the input space should receive similar gradient feedback. Once the gradients for such nearby points are white noise, the model cannot capture the data manifold through these learning algorithms. Brownian gradients yield more correlated updates, which is a precondition for effective learning.

For plain feedforward networks, they give empirical evidence that the correlation between gradients decreases on the order of 1/2^L, where L is the number of layers. Decreasing correlation means gradient feedback that looks more like white noise.
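One rough way to see this trend is to compare the input gradients at neighbouring inputs for plain ReLU networks of increasing depth. The PyTorch sketch below is my own toy experiment (1-D inputs, arbitrary width, He initialization), not the paper's exact protocol:

```python
import torch
import torch.nn as nn

def make_mlp(depth, width=128):
    layers, in_dim = [], 1
    for _ in range(depth):
        lin = nn.Linear(in_dim, width)
        nn.init.kaiming_normal_(lin.weight, nonlinearity="relu")  # keep gradient scale stable with depth
        layers += [lin, nn.ReLU()]
        in_dim = width
    layers.append(nn.Linear(in_dim, 1))
    return nn.Sequential(*layers)

def input_grads(net, x):
    x = x.clone().requires_grad_(True)
    net(x).sum().backward()          # gradient of the output w.r.t. each input
    return x.grad.detach().squeeze()

torch.manual_seed(0)
xs = torch.linspace(-2, 2, 256).unsqueeze(1)   # a grid of nearby 1-D inputs

for depth in [2, 8, 16, 32]:
    g = input_grads(make_mlp(depth), xs)
    corr = torch.corrcoef(torch.stack([g[:-1], g[1:]]))[0, 1]
    print(f"depth {depth:2d}: neighbour gradient correlation {corr.item():.3f}")
```

As depth grows, the correlation between gradients at neighbouring inputs should drop toward zero, which is what "shattering" refers to.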

One important cause of white-noise gradient feedback is the co-activation of network units. In a working model, we expect units to be receptive to different structures in the data, so for each instance a different subset of units should be active for effective information flow. They observe that, as activations pass through the layers, the co-activation rate grows. BN prevents this by keeping the co-activation rate near 1/4: each unit is active for roughly half of the inputs, so any pair of units is jointly active for about a quarter of them.
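As a toy illustration of where the 1/4 figure comes from (this is just the back-of-the-envelope argument, not the paper's experiment): if BN keeps pre-activations roughly zero-mean and unit-variance, each ReLU unit fires on about half of the inputs, and two roughly independent units fire together on about a quarter of them.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_units = 10_000, 256

# BN-style pre-activations: zero mean, unit variance for every unit.
pre = rng.normal(size=(n_inputs, n_units))
active = pre > 0                                   # ReLU firing pattern

print("mean firing rate:", active.mean())                    # ~0.5
print("co-activation of units 0 and 1:",
      (active[:, 0] & active[:, 1]).mean())                  # ~0.25
```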

Besides the co-activation rate, how dispersed the unit activations are is another important question: similar instances should activate similar subsets of units, and the active subset should shift to other units as the structure of the data changes. This is where skip connections come into play. Their observation is that skip connections improve networks in this respect, as can be seen in the figure below.

The effectiveness of skip connections increases further with the beta scaling introduced by the Inception-v4 architecture: the residual branch is scaled by a constant factor before being summed with the current layer's activation.
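To make this concrete, here is a minimal PyTorch sketch of a residual block with such a constant scaling factor on the residual branch. This is my own toy block, not the exact Inception-ResNet module; the value of `beta` is just a placeholder (the Inception-v4 paper reports small constants, roughly in the 0.1-0.3 range).

```python
import torch
import torch.nn as nn

class ScaledResidualBlock(nn.Module):
    """y = relu(x + beta * f(x)): the residual branch f is scaled by a
    constant beta before being added to the identity path."""

    def __init__(self, channels, beta=0.2):
        super().__init__()
        self.beta = beta
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.beta * self.branch(x))

# quick shape check
block = ScaledResidualBlock(channels=32, beta=0.2)
print(block(torch.randn(4, 32, 16, 16)).shape)  # torch.Size([4, 32, 16, 16])
```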

A small discussion

This is a very intriguing paper to me, being one of the scarce works that investigate network dynamics instead of blindly tweaking architectures to chase accuracy numbers.

ResNets are known to make it possible to train networks with hundreds of layers, which was not feasible before. Now, with this work, we have another scientific argument explaining their effectiveness. I would also like to point to Veit et al. (2016), who demystify ResNets as an ensemble of many shallow networks. Combining both papers, it makes total sense to me how ResNets help in training very deep networks. If the shattered gradient effect, as stated here, grows with the number of layers on the order of 2^L, then it is practically impossible to train hundreds of layers with an ad-hoc plain network. As a corollary, since a ResNet behaves like an ensemble of shallow networks, this effect is alleviated. We can see this empirically in this paper, and the two works are complementary in that sense.

Note: This hastily written paper note might include errors of any kind. Please let me know if you find one. Best 🙂

Selfai: A Method for Understanding Beauty in Selfies

Selfies are everywhere. With all the fun masks, poses, and filters, things have gone a bit crazy. When we come across any of these selfies, we automatically give an intuitive score for the quality and beauty of the selfie. However, it is not really possible to describe what makes a selfie beautiful. There are some obvious attributes, but they do not fully describe it.

With the folks at 8bit.ai, we decided to develop a system that analyzes selfie images and scores them according to their quality and beauty. The idea was to see whether it is possible to mimic that bizarre perceptual judgment of humans with recent advances in AI, and if it is, to turn it into an application that people can use for whatever purpose they like. For now, we only have an Instagram bot, @selfai_robot. You can check it out before reading on.

Adriana Lima images ranked by Selfai model

Continue reading Selfai: A Method for Understanding Beauty in Selfies

ImageNet winners after 2012

  • 2012
    • 0.15 - Supervision (AlexNet) - ~60,954,656 params
    • 0.26 - ISI (ensemble of features)
    • 0.27 - LEAR (Fisher Vectors)
  • 2013
    • 0.117 - Clarifai (paper)
    • 0.129 - NUS (very parametric but interesting method based on test and train data affinities)
    • 0.135 - ZF (same paper)
  • 2014
    • 0.06 - GoogLeNet (Inception Modules) - ~11,176,896 params
    • 0.07 - VGGnet (Go deeper and deeper)
    • 0.08 - SPPnet (A retrospective addition from early vision)

Bits and Missings for NNs

  • Adversarial instances and robust models
    • Generative Adversarial Networks http://arxiv.org/abs/1406.2661 - train the classifier net in opposition to another net that creates possible adversarial instances as training evolves.
    • Apply genetic algorithms every N training iterations of the net to create some adversarial instances.
    • Apply the fast gradient approach to image pixels to generate intruding images (see the FGSM sketch after this list).
    • Goodfellow states that DAEs or CAEs are not full solutions to this problem. (verify why?)
  • Blind training of nets
    • We train huge networks in a very brute-force fashion. What I mean is, we use larger and larger models because we do not know how to learn concise and effective models. Instead, we rely on redundancy and expect at least some units to be receptive to the discriminating features.
  • Optimization (as always)
    • It seems inefficient to me to still be using back-propagation after all this work in the field. Another interesting fact: much of the effort in the research community goes into finding new tricks that ease back-propagation's flaws. I think we should replace back-propagation altogether instead of relying on fleeting day-to-day fixes.
    • Still using SGD? Still?
  • Sparsity ?
    • After a year of hot discussion about sparse representations and their similarity to human brain activity, the topic seems to have been shelved. I still believe sparsity is a very important part of good data representations, and it should be integrated into state-of-the-art learning models, not only autoencoders.
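For the fast-gradient idea above, here is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch. `model`, `image`, and `label` are hypothetical placeholders for a trained classifier and a normalized input batch; `eps` controls the perturbation size.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=0.03):
    """One-step fast gradient sign attack: move the input by eps along the
    sign of the input gradient of the loss to create an adversarial image."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + eps * image.grad.sign()
    return adv.clamp(0, 1).detach()   # keep pixels in a valid range
```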

DISCLAIMER: If you are reading this, these are only captain's notes intended as my own research recap, so there are many missing references and novice arguments.