Suppose you have a problem that you would like to tackle with machine learning and use the resulting system in a real-life project. I'd like to share my simple pathway for that purpose, both to provide a basic guide to beginners and to keep these things as a reminder to myself. These rules are tricky: even though they are simple, it is not trivial to remember them all and to suppress the instinct that wants to see a running model as soon as possible.
When we confront a problem, we initially have numerous learning algorithms, anything from bytes to gigabytes of data, and established knowledge about applying some of these models to particular problems. With all of this in mind, we follow a three-stage procedure:
Selfies are everywhere. With different fun masks, poses, and filters, the trend has gone crazy. When we come across any of these selfies, we automatically assign an intuitive score for the quality and beauty of the selfie. However, it is not really possible to describe what makes a selfie beautiful. There are some obvious attributes, but they are not fully prescribed.
With the folks at 8bit.ai, we decided to develop a system that analyzes selfie images and scores them according to their quality and beauty. The idea was to see whether it is possible to mimic that bizarre perceptual understanding of humans with the recent advancements in AI. And if it is, then let's make an application and let people use it for whatever purpose. For now, we only have an Instagram bot, @selfai_robot. You can check it out before reading.
A continuous distribution on the simplex that approximates discrete (one-hot) vectors and is differentiable with respect to its parameters via the reparameterization trick used in VAEs.
It is used for semi-supervised learning.
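To make this concrete, here is a minimal numpy sketch of drawing such a relaxed one-hot sample, assuming the usual softmax-over-Gumbel-noise form; the function name and the temperature value are my own choices, not from the notes above.

import numpy as np

def sample_relaxed_one_hot(logits, temperature=0.5):
    # Add Gumbel(0, 1) noise to the logits, then apply a tempered softmax.
    u = np.random.uniform(1e-10, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))
    y = (logits + g) / temperature
    e = np.exp(y - y.max())
    return e / e.sum()  # a point on the simplex, near one-hot for small temperature

print(sample_relaxed_one_hot(np.log(np.array([0.7, 0.2, 0.1])), temperature=0.1))

Lower temperatures push samples toward the simplex corners (near one-hot), while gradients still flow through the logits.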
DEEP UNSUPERVISED LEARNING WITH SPATIAL CONTRASTING
Learning useful unsupervised image representations by using a triplet loss on image patches. The triplet is defined by two image patches from the same image, as the anchor and the positive instance, and a patch from a different image as the negative. It gives a good boost on CIFAR-10 when used as a pretraining method.
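For reference, a minimal numpy sketch of a standard margin-based triplet loss on patch embeddings; the paper's exact objective may differ, and the margin value and random embeddings here are my placeholders.

import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Pull same-image patches together, push patches from other images apart.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a, p, n = np.random.randn(3, 128)  # stand-in patch embeddings
print(triplet_loss(a, p, n))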
How would you apply it to a real, large-scale classification problem?
UNDERSTANDING DEEP LEARNING REQUIRES RETHINKING GENERALIZATION
This paper argues the following: traditional machine learning frameworks (VC dimension, Rademacher complexity, etc.) that try to explain how learning occurs are not very explanatory for the success of deep learning models, and we need more understanding from different perspectives.
They rely on the following empirical observations:
Deep networks are able to fit any kind of training data, even white-noise instances with random labels. This implies that neural networks have a very good brute-force memorization capacity (see the sketch after this list).
Explicit regularization techniques (dropout, weight decay, batch norm) improve model generalization, but that does not mean the same network generalizes poorly without any of them. For instance, an Inception network trained without any explicit technique has an 80.38% top-5 rate, whereas the same network achieves 83.6% on the ImageNet challenge with explicit techniques.
A two-layer network with 2n+d parameters can learn any function on n samples in d dimensions. They provide a proof of this statement in the appendix. From the empirical standpoint, they show network performance on the MNIST and CIFAR-10 datasets with a two-layer multilayer perceptron.
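The memorization observation is easy to reproduce on a toy scale. Below is my own scikit-learn sketch, not the paper's setup; the sizes and hyperparameters are arbitrary.

import numpy as np
from sklearn.neural_network import MLPClassifier

# White-noise inputs with random labels: there is no structure to learn.
X = np.random.randn(200, 32)
y = np.random.randint(0, 2, size=200)

# A small two-layer network (one hidden layer) still memorizes it.
clf = MLPClassifier(hidden_layer_sizes=(512,), max_iter=2000).fit(X, y)
print(clf.score(X, y))  # training accuracy, typically close to 1.0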
The above observations entail the following questions and conflicts:
The traditional notion of learning suggests stronger regularization as we use more powerful models. However, a large enough network is able to memorize any kind of data, even if that data is just random noise. Also, without any further explicit regularization, these models are able to generalize well on natural datasets. This shows that, contrary to general belief, brute-force memorization is still a viable learning method that yields reasonable generalization performance at test time.
Classical approaches are poorly suited to explain the success of neural networks, and more investigation is imperative to understand what is really going on from a theoretical view.
The generalization power of these networks is not really determined by explicit techniques; instead, implicit factors such as the learning method or the model architecture seem more effective.
The explanation of generalization needs to be redefined in order to resolve the conflicts depicted above.
My take: these large models are able to learn any function (and large no longer means deep), and if there is any kind of information match between the training data and the test data, they are also able to generalize well. One possible explanation is to think of these models as an ensemble of many millions of smaller models, controlled by the zeroing effect of activation functions. Thus, a model is able to memorize any function due to its size and implied capacity, but it still generalizes well due to this ensembling effect.
A crucial problem in real DL system design is to capture the test data distribution with a model that only sees the training data distribution. Therefore, it is always important to find a good data-splitting scheme that at least gives the right measure of such divergence.
It is always a waste to spend all your time fine-tuning your model against validation data taken only from the training set, because when you deploy the model it sees new instances sampled from a dynamically shifting data distribution. If you have a chance to obtain samples from this dynamic environment, use them to test your model on real instances, keep your model coherent with them, and don't mislead your training flow.
That being said, in the figure above, the second row depicts the right way to split your data, and the third row shows the smoothed version that is suggested in practice.
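In code, such a split might look like the sketch below; the function name, the fractions, and the assumption that you have separate data pools from the training and deployment distributions are all mine.

import numpy as np

def split(train_pool, deploy_pool, val_frac=0.2, seed=0):
    # Validation comes from BOTH distributions: train-val measures variance,
    # test-val measures the train-test mismatch.
    rng = np.random.default_rng(seed)
    train_pool = rng.permutation(train_pool)
    deploy_pool = rng.permutation(deploy_pool)
    n = int(len(train_pool) * val_frac)
    m = len(deploy_pool) // 2
    train, train_val = train_pool[n:], train_pool[:n]
    test_val, test = deploy_pool[:m], deploy_pool[m:]
    return train, train_val, test_val, test

train, train_val, test_val, test = split(np.arange(100), np.arange(20))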
The figure above shows common machine learning problems in relation to the different components of your workflow. It is really important to understand what is being said here and what these problems mean.
Bias measures the quality of your model on the training data: if it predicts poorly on training data, it has a "bias" problem. If you have good performance on training data but not on validation data, you have a "variance" problem. If performance differs between validation data taken from the training set and from the test set, it is a "train-test mismatch" problem. If performance suffers due to distribution shift at test time, it is "overfitting".
Bias requires a better architecture and longer training. Variance needs more data and regularization. Train-test mismatch needs more training data from a distribution similar to your test data. Overfitting needs regularization, more data, and data-synthesis effort. A toy diagnostic helper following these rules is sketched below.
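Here is that helper as I would write it from the definitions above; the argument names, the ordering, and the 5% threshold are my own arbitrary choices, not from the talk.

def diagnose(train_err, train_val_err, test_val_err, test_err,
             human_err=0.0, gap=0.05):
    # Compare consecutive error measures and report the first gap
    # exceeding the threshold, following the definitions above.
    if train_err - human_err > gap:
        return "Bias: try a bigger architecture and longer training"
    if train_val_err - train_err > gap:
        return "Variance: get more data and regularize"
    if test_val_err - train_val_err > gap:
        return "Train-test mismatch: collect data closer to the test distribution"
    if test_err - test_val_err > gap:
        return "Overfitting: regularize, add data, synthesize data"
    return "No obvious problem"

print(diagnose(0.01, 0.02, 0.10, 0.11))  # -> train-test mismatch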
The chart above shows a sound way of conducting DL system evolution. Follow these decisions with empirical evidence, and don't skip any of them if you don't want to be disappointed in the end. (I say this after many disappointments 🙂 )
When we see that the training and validation errors are close to human-level performance, the remaining problem is more on the variance side, and we need to collect more data similar to the test portion and put more effort into data synthesis. Training and validation errors far from human-level performance are the sign of a bias problem, which requires larger models and more training time. Keep in mind that human performance is not the limit of what your model is theoretically capable of.
Disclaimer: figures are taken from https://kevinzakka.github.io/2016/09/26/applying-deep-learning/, which summarizes Andrew Ng's talk.
A successful AI agent should communicate. It is all about language. It should understand and express itself in words in order to communicate with us. All of this starts with the "meaning" of words, the atomic unit of human communication. This is one of the fundamental problems of Natural Language Processing (NLP).
"meaning" is described as "the idea that is represented by a word, phrase, etc. How about representing the meaning of a word in a computer. The first attempt is to use some kind of hardly curated taxonomies such as WordNet. However such hand made structures not flexible enough, need human labor to elaborate and do not have semantic relations between words other then the carved rules. It is not what we expect from a real AI agent.
NLP research then focused on using number vectors to represent words. The first approach is to denote words with discrete (one-hot) representations. That is, if we assume a vocabulary of 1K words, then we create a length-1K zero vector with a single 1 representing the target word.
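As a concrete illustration, here is a tiny sketch of one-hot word vectors; the three-word vocabulary is obviously a toy of mine.

import numpy as np

vocab = {"cat": 0, "dog": 1, "selfie": 2}  # toy vocabulary

def one_hot(word):
    # A |V|-length zero vector with a single 1 at the word's index.
    v = np.zeros(len(vocab))
    v[vocab[word]] = 1.0
    return v

print(one_hot("dog"))  # [0. 1. 0.]

Note that any two different one-hot vectors are orthogonal, so this encoding carries no notion of similarity between words, which is exactly why we need better word representations.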
Decorators are handy sugar for Python programmers: they shorten things and provide more concise programming.
For instance, you can use decorators for user authentication in your REST API servers. Assume that you need to authenticate the user before each REST call. Instead of appending the same procedure to each call function, it is better to define a decorator and tag it onto your call functions, as sketched below.
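Something along these lines; the require_auth name, the request dict, and the token check are hypothetical placeholders for your real framework's objects.

from functools import wraps

def require_auth(func):
    @wraps(func)
    def wrapper(request, *args, **kwargs):
        # Placeholder check; swap in your real session/token validation.
        if request.get("token") != "secret":
            return {"status": 401, "body": "Unauthorized"}
        return func(request, *args, **kwargs)
    return wrapper

@require_auth
def get_profile(request):
    return {"status": 200, "body": "profile data"}

print(get_profile({"token": "secret"}))  # status 200
print(get_profile({"token": "wrong"}))   # status 401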
Let's see the small example below; I hope it is self-explanatory.
How to use Decorators:
Decorators are functions called by annotations
Annotations are the tags prefixed by @
### Decorator functions ###
def helloSpace(func):
    def wrapper():
        print("Hello Space!")
        func()
    return wrapper
def helloCosmos(func):
    def wrapper():
        print("Hello Cosmos!")
        func()
    return wrapper
@helloCosmos # annotation
@helloSpace # annotation
def hello():
    print("Hello World!")
### Above code is equivalent to these lines
# hello = helloSpace(hello)
# hello = helloCosmos(hello)
### Let's Try
hello()