What is special about rectifier neural units used in NN learning?

Sigmoid unit :

f(x) = \frac{1}{1 + \exp(-x)}

Tanh unit:

f(x) = tanh(x)
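The two saturating units above can be sketched directly; a minimal Python version using only the standard library (function names are ours, not from any particular framework):

```python
import math

def sigmoid(x):
    # Logistic sigmoid: 1 / (1 + exp(-x)), squashes input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh_unit(x):
    # Hyperbolic tangent, squashes input into (-1, 1)
    return math.tanh(x)

print(sigmoid(0.0))    # 0.5
print(tanh_unit(0.0))  # 0.0
```

Both saturate for large |x|, which is why their gradients vanish at the tails.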

Rectified linear unit (ReLU):

f(x) = \sum_{i=1}^{\infty} \sigma(x - i + 0.5) \approx \log(1 + e^{x})

We call:

  • \sum_{i=1}^{\infty} \sigma(x - i + 0.5) the stepped sigmoid
  • \log(1 + e^{x}) the softplus function
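The approximation above can be checked numerically. A minimal Python sketch, truncating the infinite sum at an assumed 100 terms (the truncation length is our choice, not from the source):

```python
import math

def stepped_sigmoid_sum(x, n_terms=100):
    # Finite truncation of sum_{i=1}^{inf} sigmoid(x - i + 0.5)
    return sum(1.0 / (1.0 + math.exp(-(x - i + 0.5)))
               for i in range(1, n_terms + 1))

def softplus(x):
    # log(1 + e^x), a smooth approximation to the stepped-sigmoid sum
    return math.log1p(math.exp(x))

for x in (-2.0, 0.0, 2.0):
    print(x, stepped_sigmoid_sum(x), softplus(x))
```

For moderate x the two agree to within a few hundredths, which is the sense in which the stacked sigmoids behave like a single softplus unit.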

The softplus function can in turn be approximated by the max function (or hard max), i.e. \max(0, x + N(0,1)), where N(0,1) is Gaussian noise.
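The hard-max form is a one-liner; a hedged sketch in Python, where the noisy variant (our naming) samples the N(0,1) term per call:

```python
import random

def relu(x):
    # Deterministic rectifier: max(0, x)
    return max(0.0, x)

def noisy_relu(x, rng=None):
    # Hard-max approximation to softplus: max(0, x + N(0, 1)),
    # with a fixed seed here for reproducibility (our choice)
    rng = rng or random.Random(0)
    return max(0.0, x + rng.gauss(0.0, 1.0))
```

The deterministic `relu` is the unit used in practice; the noise term is what makes the hard max behave like softplus in expectation.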
