Overview
Neural networks can be understood in purely mathematical terms as function composition. While it is not necessary to view neural networks this way, doing so is vital to understanding how neural networks are trained and what types of problems they can solve.
Neural Network as Function Composition
Function composition results when you take the output of one function and pass it to a second function, as in
{% \vec{y} = f(g(\vec{x})) %}
(The algebra of function composition is studied thoroughly in category theory.)
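Composition is just as easy to express in code as in notation. Here is a minimal sketch in JavaScript; the f and g below are made-up example functions:
let compose = function(f, g){
    // Feed the output of g into f: compose(f, g)(x) = f(g(x)).
    return x => f(g(x));
}
// Hypothetical example functions.
let g = x => 2 * x;   // g doubles its input
let f = x => x + 1;   // f adds one
compose(f, g)(3);     // f(g(3)) = f(6) = 7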
In terms of perceptrons and activation functions, we can imagine one layer to look like
{% \vec{y} = a(p(\vec{x})) %}
where {% p %} is the affine function
{% p(\vec{x}) = b + \sum_i w_i x_i %}
and {% a %} is the activation function, applied elementwise to the result of {% p %}.
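As a sketch of these two pieces in code, with sigmoid standing in as one common choice of {% a %} (the names affine and perceptron are just for illustration):
let affine = function(weights, bias, inputs){
    // p(x) = b + sum_i w_i * x_i
    return bias + weights.reduce((sum, w, i) => sum + w * inputs[i], 0);
}
// Sigmoid, one common choice of activation function.
let activation = z => 1 / (1 + Math.exp(-z));
let perceptron = function(weights, bias, inputs){
    // y = a(p(x))
    return activation(affine(weights, bias, inputs));
}
perceptron([0.5, -0.3], 0.1, [1, 2]); // a(0.1 + 0.5 - 0.6) = a(0) = 0.5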
Adding feature extraction to the mix simply adds another function to the composition, one that processes the inputs before they reach the first layer of perceptrons.
{% \vec{y} = a_1(p_1(e(\vec{x}))) %}
where {% e %} is a feature extraction function.
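For instance, {% e %} might scale each raw input into a fixed range before it reaches the first layer. A minimal sketch, using min-max scaling as a hypothetical extraction step:
let e = function(inputs, mins, maxs){
    // Scale each input to the range [0, 1].
    return inputs.map((x, i) => (x - mins[i]) / (maxs[i] - mins[i]));
}
e([50, 200], [0, 100], [100, 300]); // [0.5, 0.5]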
Two Layer Neural Network
Given these definitions, a two-layer neural network can be written as
{% \vec{y} = a_2(p_2(a_1(p_1(e(\vec{x}))))) %}
Note that this network includes the feature extraction function, which is applied only to the inputs of the first layer.
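Read inside-out, the formula translates directly into nested function calls. A sketch, assuming {% e %}, {% p_1 %}, {% a_1 %}, {% p_2 %}, and {% a_2 %} are already defined as functions:
let network = function(x){
    // y = a2(p2(a1(p1(e(x))))), evaluated from the innermost call outward.
    return a2(p2(a1(p1(e(x)))));
}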
Layer Implementation
A single layer, written in matrix form as
{% \vec{y} = a(M \vec{x}) %}
where {% M %} is the weight matrix (the bias {% b %} can be folded into {% M %} by appending a constant 1 to the inputs), can then be easily implemented using the linear algebra library.
let calculate = function(inputs, weights){
    // Multiply the weight matrix by the input column vector.
    let value = la.multiply(weights, inputs);
    // Apply the activation function elementwise to each entry.
    return value.map(p => [activation(p[0])]);
}
Try it!
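As a usage sketch, here is a hypothetical 2x2 weight matrix applied to a 2x1 input column vector. The la object below is a stand-in assuming the library's multiply performs standard matrix multiplication on arrays of arrays, and sigmoid stands in for the activation:
// Stand-in for the linear algebra library, assuming standard
// matrix multiplication on arrays of arrays.
let la = {
    multiply: function(A, B){
        return A.map(row => B[0].map((_, j) =>
            row.reduce((sum, a, k) => sum + a * B[k][j], 0)));
    }
}
// Sigmoid, as an example activation.
let activation = z => 1 / (1 + Math.exp(-z));
let weights = [[0.2, -0.4],
               [0.7,  0.1]];
let inputs  = [[1],
               [0.5]];
calculate(inputs, weights); // [[0.5], [~0.68]]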
Two Layer Implementation
A two-layer neural network can then be implemented as
let calculateLayer = function(inputs, weights){
    // Multiply the layer's weight matrix by the input column vector.
    let value = la.multiply(weights, inputs);
    // Apply the activation function elementwise to each entry.
    return value.map(p => [activation(p[0])]);
}
let calculate = function(inputs){
    // First layer: map the inputs to the hidden vector.
    let y1 = calculateLayer(inputs, weights1);
    // Second layer: map the hidden vector to the output.
    let y2 = calculateLayer(y1, weights2);
    return y2;
}
Try it!
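For the two layers to compose, the shapes must chain: if weights1 is m x n, the input must be an n x 1 column vector and weights2 must have m columns. A hypothetical usage, reusing the stand-in la and sigmoid activation from above:
// weights1 (2x3) maps a 3x1 input to a 2x1 hidden vector.
let weights1 = [[0.2, -0.4,  0.1],
                [0.7,  0.1, -0.5]];
// weights2 (1x2) maps the 2x1 hidden vector to a 1x1 output.
let weights2 = [[0.3, -0.6]];
calculate([[1], [0.5], [2]]); // [[~0.48]]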