MACHINE LEARNING

Gradual Training with Tacotron for Faster Convergence

Tacotron is a commonly used Text-to-Speech architecture and a very flexible alternative to traditional solutions: it requires only text and the corresponding voice clips to train the model, avoiding the toil of fine-grained data annotation. However, Tacotron can also be very time-consuming to train, especially if you don't know the right … Continue reading Gradual Training with Tacotron for Faster Convergence

Posted in Machine Learning, Research, Research Notes, Uncategorized

Irregular Regularization Methods

Mixup - https://arxiv.org/abs/1710.09412.pdf (sketch below)
Shake-Shake - https://arxiv.org/pdf/1705.07485.pdf
MixFeat - https://openreview.net/forum?id=HygT9oRqFX
Speed perturbation (ASR) - https://www.danielpovey.com/files/2015_interspeech_augmentation.pdf

Please feel free to extend the list...
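As a quick taste of the first item, here is a minimal mixup sketch in PyTorch. It assumes a plain classification batch; the function name, the alpha default, and the loss-mixing recipe in the comment are illustrative, not code from the post.

```python
import torch

def mixup(x, y, alpha=0.2):
    """Mixup: convex-combine random pairs of inputs (and, via the
    returned weights, their labels) with a Beta-distributed ratio."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1 - lam) * x[perm]
    # Mix the loss the same way:
    # loss = lam * criterion(out, y) + (1 - lam) * criterion(out, y[perm])
    return x_mixed, y, y[perm], lam
```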

Posted in Machine Learning, Research Notes

Text to Speech Deep Learning Architectures

Small intro and background: Recently, I started at Mozilla Research. I am really excited to be part of a small but great team working hard to solve important ML problems. And everything is open-sourced. We license things to make them open source. An oxymoron at first sight, isn't it? But I like it! Before my presence, … Continue reading Text to Speech Deep Learning Architectures

Posted in Machine Learning, Research, Research Notes

Why mere Machine Learning cannot predict Bitcoin price

Lately, I have been studying time series to see something beyond the limits of my experience. I decided to use what I learned for cryptocurrency price prediction, with a hunch of getting rich. Kidding? Or not :). As I saw more of the intricacies of the problem, I dug deeper and found a new challenge … Continue reading Why mere Machine Learning cannot predict Bitcoin price

Posted in Machine Learning, Research, Research Notes

Online Hard Example Mining on PyTorch

Online Hard Example Mining (OHEM) is a way to pick hard examples at reduced computational cost, improving your network's performance on borderline cases, which generalizes to better overall performance. It is mostly used for object detection. Suppose you'd like to train a car detector, and you have positive (with a car) and negative images (with … Continue reading Online Hard Example Mining on PyTorch
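To make the idea concrete, here is a minimal loss-level OHEM sketch in PyTorch, assuming a plain classification batch rather than a full detection pipeline; `ohem_loss` and `keep_ratio` are hypothetical names used for illustration.

```python
import torch
import torch.nn.functional as F

def ohem_loss(logits, targets, keep_ratio=0.7):
    """Keep only the hardest (highest-loss) examples in the batch and
    backpropagate through those alone."""
    per_example = F.cross_entropy(logits, targets, reduction="none")
    num_keep = max(1, int(keep_ratio * per_example.size(0)))
    hard_losses, _ = per_example.topk(num_keep)  # largest losses first
    return hard_losses.mean()
```

Since gradients flow only through the selected entries, the easy examples in the batch contribute nothing to the update.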

Posted in CodeBook, Machine Learning

Paper review: EraseReLU

paper: https://arxiv.org/pdf/1709.07634.pdf ReLU can be seen as a way to train an ensemble of an exponential number of linear models, owing to its zeroing effect: each iteration activates a random set of units and hence a different combination of linear models. Relying on this observation, the authors discuss that it might be useful to remove the non-linearities of some layers … Continue reading Paper review: EraseReLU
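As a rough illustration of the idea, not the paper's exact recipe, here is a sketch of a convolutional block whose final non-linearity can be erased; the block layout and the `erase_last_relu` flag are assumptions made for demonstration.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, erase_last_relu=False):
    """EraseReLU-style block: keep the inner ReLU but optionally drop
    the one that would normally close the module."""
    layers = [
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
    ]
    if not erase_last_relu:
        layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)
```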

Posted in Machine Learning, Research, Research Notes

Designing a Deep Learning Project

There are numerous online and offline technical resources about deep learning. Every day, people publish new papers and write new things. However, it is rare to see resources that teach the practical concerns of structuring a deep learning project, from top to bottom, from problem to solution. People know the fancy technicalities, but even some experienced people feel lost … Continue reading Designing a Deep Learning Project

Posted in Machine Learning, Research Notes

Paper Review: Self-Normalizing Neural Networks

One of the main problems of neural networks is taming layer activations so that one can obtain stable gradients and learn faster, without any confining factor. Batch Normalization shows us that keeping activations with mean 0 and variance 1 seems to make things work. However, despite the indisputable effectiveness of BN, it adds more … Continue reading Paper Review: Self-Normalizing Neural Networks
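For reference, the paper's SELU activation is compact enough to write out directly; below is a minimal PyTorch sketch using the constants reported in the paper (PyTorch also ships a built-in equivalent, `torch.nn.functional.selu`).

```python
import torch

# SELU constants from Klambauer et al., 2017
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    """selu(x) = scale * x                    for x > 0
       selu(x) = scale * alpha * (e^x - 1)    for x <= 0"""
    return SCALE * torch.where(x > 0, x, ALPHA * (torch.exp(x) - 1))
```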

Posted in Machine Learning, Research Notes

Paper Notes: The Shattered Gradients Problem ...

paper: https://arxiv.org/abs/1702.08591 The full title of the paper is "The Shattered Gradients Problem: If resnets are the answer, then what is the question?". It is really interesting work, with all its findings about the gradient dynamics of neural networks. It also examines Batch Normalization (BN) and Residual Networks (ResNets) under this problem. The problem, dubbed "Shattered Gradients", is described as … Continue reading Paper Notes: The Shattered Gradients Problem ...

Posted in Computer Vision, Machine Learning, Research Notes

Dilated Convolution

In simple terms, a dilated convolution is just a convolution applied to its input with defined gaps. With this definition, given that our input is a 2D image, dilation rate k=1 is a normal convolution, k=2 means skipping one pixel per input, and k=4 means skipping three pixels. It is best to see the figures below with the same k … Continue reading Dilated Convolution
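A minimal PyTorch sketch of the same idea, assuming a single-channel 32x32 input; the `dilation` argument of `nn.Conv2d` corresponds directly to the rate k described above.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 32, 32)  # (batch, channels, height, width)

conv_k1 = nn.Conv2d(1, 1, kernel_size=3, dilation=1)  # normal convolution
conv_k2 = nn.Conv2d(1, 1, kernel_size=3, dilation=2)  # one-pixel gaps
conv_k4 = nn.Conv2d(1, 1, kernel_size=3, dilation=4)  # three-pixel gaps

# A 3x3 kernel with dilation d covers a (2d+1)x(2d+1) receptive field,
# so without padding the output shrinks accordingly:
print(conv_k1(x).shape)  # torch.Size([1, 1, 30, 30])
print(conv_k2(x).shape)  # torch.Size([1, 1, 28, 28])
print(conv_k4(x).shape)  # torch.Size([1, 1, 24, 24])
```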

Posted in Computer Vision, Machine Learning, Research Notes