Regularization

Overview


Regularization is the practice of adding terms to the loss function of a learning algorithm in order to bias the result toward a particular set of parameters.
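
In symbols (the notation here is a standard illustrative choice, not taken from this page), the regularized objective has the form

J(\theta) = L(\theta; \mathcal{D}) + \lambda R(\theta),

where L is the original loss over the data \mathcal{D}, R is the regularization term, and \lambda \ge 0 controls how strongly the solution is biased.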

The typical use of regularization is to bias the model parameters toward zero (see ridge regression, for example).
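
As a minimal sketch of the zero-biasing case (the function and variable names below are illustrative assumptions, not taken from this page), the following Python snippet adds an L2 penalty to a squared-error loss, so that larger weights cost more and the fit is pulled toward zero:

import numpy as np

def ridge_loss(w, X, y, lam):
    # Squared-error data term plus an L2 penalty on the weights;
    # lam >= 0 sets how strongly w is biased toward zero.
    residuals = X @ w - y
    return np.sum(residuals ** 2) + lam * np.sum(w ** 2)

def ridge_fit(X, y, lam):
    # Closed-form minimizer of ridge_loss:
    # w = (X^T X + lam * I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

Setting lam = 0 recovers ordinary least squares; increasing lam shrinks the fitted weights toward zero.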

In general, regularization can be shown to be equivalent to a specific form of Bayesian inference: rather than stating a prior distribution for the parameters explicitly, regularization builds the bias into the loss, pulling the fit toward the values that such a prior would favor. The regularization term plays the role of the negative log-prior in maximum a posteriori (MAP) estimation.
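
Concretely (a standard derivation; the notation is assumed here rather than taken from this page), if the data model is y = Xw + \varepsilon with noise \varepsilon \sim \mathcal{N}(0, \sigma^2 I), and the prior is w \sim \mathcal{N}(0, \tau^2 I), then maximum a posteriori estimation reduces to ridge regression:

\hat{w}_{\mathrm{MAP}} = \arg\max_{w} \left[ \log p(y \mid X, w) + \log p(w) \right]
                       = \arg\min_{w} \lVert Xw - y \rVert_2^2 + \lambda \lVert w \rVert_2^2,
\qquad \lambda = \sigma^2 / \tau^2.

Up to additive constants, the L2 penalty is exactly the negative log of the Gaussian prior.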

Regularization Algorithms

