Neural Networks as Function Composition

Overview


Neural networks can be understood in purely mathematical terms as function composition. While it is not necessary to view neural networks this way, this perspective is vital to understanding how neural networks are trained and what types of problems they can solve.

Neural Network as Function Composition


At its core, a neural network computes a composition of functions:
{% \vec{y} = f(g(\vec{x})) %}
In terms of perceptrons and activation functions, a single layer looks like
{% \vec{y} = a(p(\vec{x})) %}
where
{% p(\vec{x}) = b + \sum_i w_i x_i %}
and {% a %} is the activation function.
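As a concrete sketch, a single perceptron followed by an activation can be written directly in code; the sigmoid here is an assumed choice of activation for illustration.

// p(x) = b + sum of w_i * x_i
let perceptron = function(weights, bias, inputs){
  let sum = bias;
  for(let i = 0; i < inputs.length; i++){
    sum += weights[i] * inputs[i];
  }
  return sum;
}

// a: a sigmoid activation (assumed for this example)
let sigmoid = z => 1 / (1 + Math.exp(-z));

// y = a(p(x)); here p = 0.1 + 0.5*1 - 0.3*2 = 0, and sigmoid(0) = 0.5
let y = sigmoid(perceptron([0.5, -0.3], 0.1, [1, 2]));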
Then a two-layer neural network can be written as
{% \vec{y} = a_2(p_2(a_1(p_1(\vec{x})))) %}
Adding feature extraction to the mix simply adds another function composition that processes the inputs prior to passing them to the first layer of perceptrons.
{% \vec{y} = a_1(p_1(e_1(\vec{x}))) %}
where {% e_1 %} is a feature extraction function.
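For instance, a hypothetical {% e_1 %} might augment the raw inputs with their squares, letting a linear layer capture a quadratic relationship:

// Hypothetical feature extraction: append the square of each input.
let extract = inputs => inputs.concat(inputs.map(x => x * x));

extract([2, 3]); // [2, 3, 4, 9]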

Layer Implementation


{% \vec{y} = a(M\vec{x}) %}
where {% M %} is the weight matrix whose rows hold each perceptron's weights (the bias can be folded in as a weight on a constant input of 1), and {% a %} is applied element-wise. This can be implemented simply using the linear algebra library.

let calculate = function(inputs, weights){
  // Multiply the weight matrix by the input column vector.
  let value = la.multiply(weights, inputs);
  // Apply the activation to each entry, keeping the column-vector shape.
  return value.map(p => [activation(p[0])]);
}
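A usage sketch with made-up weights, assuming la.multiply performs standard matrix multiplication as above and an activation function (e.g. a sigmoid) is already defined:

// Two perceptrons over three inputs: a 2x3 weight matrix
// applied to a 3x1 input column vector yields a 2x1 output.
let weights = [[0.2, -0.5, 0.1],
               [0.7,  0.3, -0.4]];
let inputs = [[1], [0], [1]];
let output = calculate(inputs, weights);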

Two-Layer Implementation



let calculateLayer = function(inputs, weights){
  // Same single-layer computation as above: multiply, then activate.
  let value = la.multiply(weights, inputs);
  return value.map(p => [activation(p[0])]);
}

let calculate = function(inputs){
  // Feed the inputs through layer 1, then its outputs through layer 2.
  let y1 = calculateLayer(inputs, weights1);
  let y2 = calculateLayer(y1, weights2);
  return y2;
}
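Again a usage sketch with made-up weights; for the layers to compose, weights2 must have as many columns as weights1 has rows:

// Layer 1 maps 3 inputs to 2 outputs; layer 2 maps those 2 to 1.
let weights1 = [[0.2, -0.5, 0.1],
                [0.7,  0.3, -0.4]];
let weights2 = [[0.6, -0.9]];
let inputs = [[1], [0], [1]];
let output = calculate(inputs); // a 1x1 column vector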
