DeepLearning2.7, Weight update: mean input and bias problem
From Wulfram Gerstner
For rectified linear units (ReLU) in the hidden layers, a backprop weight-update step shifts the mean input to the next layer. This creates a bias problem that must be compensated by a matching bias update.
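A minimal NumPy sketch of the effect described above (the layer sizes, learning step, and variable names are illustrative assumptions, not taken from the lecture): because ReLU outputs are never negative, their mean is positive even for zero-mean pre-activations, so any change to a downstream weight also shifts the downstream mean unless the bias absorbs it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean Gaussian pre-activations feeding a ReLU hidden layer
a = rng.normal(0.0, 1.0, size=100_000)
x = np.maximum(a, 0.0)                    # ReLU output: never negative

# ReLU rectifies the signal: although the input has (near-)zero mean,
# the output mean is positive (E[max(a,0)] = 1/sqrt(2*pi) ≈ 0.399 here)
mean_x = x.mean()

# Downstream linear unit: y = w * x + b
w, b = 0.5, 0.0
y_before = w * x + b

# A backprop step changes w; because mean(x) > 0, this also shifts mean(y)
dw = -0.1                                 # hypothetical weight update
y_after = (w + dw) * x + b
shift = y_after.mean() - y_before.mean()  # equals dw * mean(x)

# Compensating with a matching bias update restores the downstream mean
b_comp = b - dw * mean_x
y_fixed = (w + dw) * x + b_comp
print(mean_x, shift, y_fixed.mean() - y_before.mean())
```

The printed shift is exactly `dw * mean_x`, and the compensated version brings the mean difference back to (numerically) zero, which is the bias correction the lecture refers to.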