Function Approximation using Optimization
Overview
One task in function approximation is to find a function that matches a set of predefined points as closely as possible. For example, one may have a set of measurements that one wishes to approximate.
To use optimization to find an approximation to a set of points, one must specify an error function that measures
how close (or far) a given function is to the data set. For example, one may define an error
function as follows:
{% Error = \sum_i |f(x_i) - y_i| %}
where we have a set of points, {%(x_i, y_i)%}. Then, we find the best approximating function by selecting the function that
minimizes this error. Of course, this isn't the only error function that can be used; often the squared error
is used instead:
{% Error = \sum_i (f(x_i) - y_i)^2 %}
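As a concrete illustration, the sketch below computes both the absolute and the squared error of a candidate line against a few sample points (the same points used in the example further below); the candidate line itself is made up for the illustration:

// Sample points and a hypothetical candidate function.
let points = [{x: 1, y: 7}, {x: 2, y: 9}, {x: 3, y: 11}];
let f = x => 4 + 2.5*x;

// Absolute error: sum over the points of |f(x_i) - y_i|.
let absoluteError = points.reduce((sum, p) => sum + Math.abs(f(p.x) - p.y), 0);

// Squared error: sum over the points of (f(x_i) - y_i)^2.
let squaredError = points.reduce((sum, p) => sum + (f(p.x) - p.y)**2, 0);

console.log(absoluteError, squaredError); // 1 and 0.5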
Function Choice
Once an error function has been specified, we also need to specify the set of functions that we will consider as candidate
approximations to the given dataset. That is, are we seeking a linear function, a quadratic function, or some other
type of function? Constraining the function in this way is necessary in order to run
the optimization. Often the form of the function is suggested by a theory about the
process that generated the dataset.
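To make this concrete, a chosen family of functions can be represented in code as a factory that maps parameter values to a particular function. The linear and quadratic factories below are illustrative sketches, not part of any library:

// A family of linear functions, parameterized by a and b.
let linear = (a, b) => (x => a + b*x);

// A family of quadratic functions, parameterized by a, b, and c.
let quadratic = (a, b, c) => (x => a + b*x + c*x*x);

// Choosing parameter values selects one member of the family.
let g = linear(5, 2);
console.log(g(3)); // 11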
Typically, when we specify a function, we specify the form of the function and provide a number of parameters that
determine the exact function. For example, a linear function may be specified as
{% f(x) = a + bx %}
Here, there are two unspecified parameters, a and b. The typical way to solve for these parameters is to use
gradient descent:
// Load the gradient descent module.
let gd = await import('/lib/optimization/v1.0.0/gradient-descent.mjs');

// The data is taken from y = 5 + 2x, so the best fit is a = 5, b = 2.
let data = [{x: 1, y: 7}, {x: 2, y: 9}, {x: 3, y: 11}];

// Absolute error of the line f(x) = a + bx over the data set.
let error = function(a, b){
    let f = function(x){
        return a + b*x;
    };
    let total = 0;
    data.forEach(p => {
        total += Math.abs(f(p.x) - p.y);
    });
    return total;
};

// Minimize the error, starting from the initial guess [a, b] = [0, 0].
let test = gd.descend(error, () => 0.01, 0.001, [0, 0], 100000);
Try it!
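To see roughly what a routine like descend is doing, here is a minimal, self-contained sketch of gradient descent with a numerically estimated gradient. This is not the library's implementation: the finite-difference width, step size, and iteration count are illustrative assumptions, and the squared error is used here because it is smoother than the absolute error and therefore better behaved under gradient descent.

// Numerically estimate the gradient of err at the parameter vector params
// using central finite differences (the width h is an illustrative choice).
let gradient = function(err, params, h = 1e-6) {
    return params.map((_, i) => {
        let plus = params.slice();
        let minus = params.slice();
        plus[i] += h;
        minus[i] -= h;
        return (err(...plus) - err(...minus)) / (2 * h);
    });
};

// Basic fixed-step gradient descent (rate and steps are illustrative).
let descend = function(err, params, rate = 0.01, steps = 100000) {
    for (let i = 0; i < steps; i++) {
        let grad = gradient(err, params);
        params = params.map((p, j) => p - rate * grad[j]);
    }
    return params;
};

// The same three points sampled from y = 5 + 2x, with the squared error.
let sample = [{x: 1, y: 7}, {x: 2, y: 9}, {x: 3, y: 11}];
let squared = (a, b) =>
    sample.reduce((sum, p) => sum + (a + b*p.x - p.y)**2, 0);

console.log(descend(squared, [0, 0])); // approximately [5, 2]

Running this prints parameters close to [5, 2], which matches the line the data was generated from.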