Model Evaluation

Overview


Model evaluation is the process of estimating the effectiveness of a trained machine learning model. In classical statistics, evaluation is often done by assuming the data are generated by some known distribution and then performing a hypothesis test to assess the goodness of fit of the fitted parameters.

In machine learning, this is often not done for a couple of reasons:
  • A model is constructed that does not have an analytical distribution
  • The modeler does not wish to make any assumptions about the underlying distribution


Forecast Error Measures


In the formulas below, {% d_t %} is the observed value at time t, {% e_t %} is the forecast error (the observed value minus the forecast), and n is the number of observations.

MAE - mean absolute error
{% MAE = \frac{1}{n} \sum_n |e_t| %}
MAPE - mean absolute percentage error
{% MAPE = \frac{1}{n} \sum_n |e_t| / d_t %}
RMSE - root mean square error
{% RMSE = \sqrt{ \frac{1}{n} \sum_n e_t^2 } %}
RMSE as a percentage of the mean observed value
{% RMSE\% = \sqrt{ \frac{1}{n} \sum_n e_t^2 } / ( \frac{1}{n} \sum_n d_t ) %}
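To make the definitions concrete, here is a minimal sketch in JavaScript computing each measure directly from two equal-length arrays. The function names and the array-based signatures are illustrative assumptions, not part of any particular library.

```javascript
// Forecast errors e_t = d_t - f_t, where d is observed and f is forecast.
function errors(d, f) {
  return d.map(function (dt, i) { return dt - f[i]; });
}

// MAE: mean of the absolute errors.
function mae(d, f) {
  var e = errors(d, f);
  return e.reduce(function (s, et) { return s + Math.abs(et); }, 0) / e.length;
}

// MAPE: mean of the absolute errors scaled by the observed values.
function mape(d, f) {
  var e = errors(d, f);
  return e.reduce(function (s, et, i) { return s + Math.abs(et) / d[i]; }, 0) / e.length;
}

// RMSE: square root of the mean squared error.
function rmse(d, f) {
  var e = errors(d, f);
  return Math.sqrt(e.reduce(function (s, et) { return s + et * et; }, 0) / e.length);
}

// RMSE as a fraction of the mean observed value.
function rmsePct(d, f) {
  var meanD = d.reduce(function (s, dt) { return s + dt; }, 0) / d.length;
  return rmse(d, f) / meanD;
}
```

For example, with observations `[100, 200, 300]` and forecasts `[110, 190, 330]`, the absolute errors are 10, 10, and 30, so MAE is 50/3.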

Using Reduction to Implement Error Measures


Implementing the above error measures


var mae = function(data, value, forecast){
  // Sum the absolute errors, then divide by the number of
  // observations to get the mean.
  var sumAbs = data.reduce(function(total, current){
    return total + Math.abs(value(current) - forecast(current));
  }, 0);
  return sumAbs / data.length;
};
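RMSE can be sketched with the same reduction pattern, assuming the same convention of `value` and `forecast` accessor functions that extract the observed and predicted values from each row:

```javascript
// Sketch of RMSE via reduce: accumulate squared errors, average,
// then take the square root. `value` and `forecast` are accessors
// returning the observed and predicted values for a row.
var rmse = function(data, value, forecast){
  var sumSq = data.reduce(function(total, current){
    var e = value(current) - forecast(current);
    return total + e * e;
  }, 0);
  return Math.sqrt(sumSq / data.length);
};
```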
Try it!



let err = await import('/lib/statistics/error/v1.0.0/regression.js');
let test = err.mae(...);
