Neural Networks

Overview


Neural networks are one of the workhorses of machine learning. The Universal Approximation Theorem guarantees that a neural network with at least one hidden layer and a suitable activation function can approximate any continuous function on a compact domain to an arbitrary degree of accuracy.
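To make the theorem concrete, the following is a minimal sketch (assuming NumPy) that fits a network with one hidden layer of tanh units to sin(x) by plain gradient descent. All names and parameter values are illustrative choices, not from any particular library; the point is only that the error shrinks as the hidden layer learns the shape of the target function.

    import numpy as np

    rng = np.random.default_rng(0)

    # Training data: 200 points of the target function sin(x) on [-pi, pi].
    x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
    y = np.sin(x)

    # One hidden layer with 20 tanh units, one linear output unit.
    W1 = rng.normal(0.0, 1.0, (1, 20))
    b1 = np.zeros(20)
    W2 = rng.normal(0.0, 1.0, (20, 1))
    b2 = np.zeros(1)

    lr = 0.05
    for step in range(5000):
        # Forward pass.
        h = np.tanh(x @ W1 + b1)        # hidden activations, shape (200, 20)
        y_hat = h @ W2 + b2             # network output, shape (200, 1)

        # Mean-squared error gradients, by hand (constant factors
        # are folded into the learning rate).
        err = y_hat - y
        grad_W2 = h.T @ err / len(x)
        grad_b2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h**2)  # tanh'(z) = 1 - tanh(z)^2
        grad_W1 = x.T @ dh / len(x)
        grad_b1 = dh.mean(axis=0)

        W1 -= lr * grad_W1; b1 -= lr * grad_b1
        W2 -= lr * grad_W2; b2 -= lr * grad_b2

    print("final MSE:", float((err**2).mean()))  # small error means a good approximation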

Layer of Perceptrons


The first step in building a neural network is to consider a set of perceptrons, each of which has the same set of inputs.

The following is a depiction of 9 perceptrons, each with the same set of 6 inputs.
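As a sketch of that layer (assuming NumPy; the weights here are random placeholders, not trained values), all 9 perceptrons can be evaluated at once as a single matrix-vector product:

    import numpy as np

    def step(z):
        # Heaviside step activation used by the classic perceptron.
        return (z >= 0).astype(float)

    rng = np.random.default_rng(0)
    x = rng.normal(size=6)        # one input vector with 6 features
    W = rng.normal(size=(9, 6))   # one weight row per perceptron
    b = rng.normal(size=9)        # one bias per perceptron

    outputs = step(W @ x + b)     # 9 outputs, one per perceptron
    print(outputs.shape)          # (9,)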

Stacked Layers


The next step in building a neural network is to take the outputs of the layer of perceptrons and use them as the inputs to another layer of perceptrons.
The Universal Approximation Theorem shows that, in general, one does not need more than 2 layers to approximate any continuous function; however, more layers with fewer nodes per layer may improve performance in practice.
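A minimal sketch of two stacked layers (again assuming NumPy, with random placeholder weights): the 9 outputs of the first layer become the 9 inputs of a second layer, here with 4 perceptrons.

    import numpy as np

    def step(z):
        return (z >= 0).astype(float)

    rng = np.random.default_rng(0)
    x = rng.normal(size=6)

    W1, b1 = rng.normal(size=(9, 6)), rng.normal(size=9)   # layer 1: 6 inputs -> 9 outputs
    W2, b2 = rng.normal(size=(4, 9)), rng.normal(size=4)   # layer 2: 9 inputs -> 4 outputs

    h = step(W1 @ x + b1)   # first-layer outputs feed the next layer
    y = step(W2 @ h + b2)   # final outputs, shape (4,)
    print(y)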

Activation Functions


After the inputs and weights are multiplied and summed in a perceptron, the result is passed through a step function before being output. In the neural network literature, this step function is known as an activation function. In a neural network, however, the activation function is typically chosen to be some function other than the step function, usually one that is smooth and differentiable, because backpropagation, the standard training algorithm, requires useful derivatives.
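As a sketch (assuming NumPy), the sigmoid is a common smooth replacement for the step function: it rises gradually from 0 to 1, and its derivative is simple and nonzero everywhere, which is exactly what backpropagation needs.

    import numpy as np

    def step(z):
        return (z >= 0).astype(float)      # original perceptron activation

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))    # smooth "soft step" in (0, 1)

    def sigmoid_prime(z):
        s = sigmoid(z)
        return s * (1.0 - s)               # simple derivative for backpropagation

    z = np.linspace(-4, 4, 9)
    print(step(z))           # jumps from 0 to 1; derivative is 0 almost everywhere
    print(sigmoid(z))        # rises smoothly from ~0 to ~1
    print(sigmoid_prime(z))  # nonzero everywhere, so gradients can flow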

For more information, please see Activation Functions.

Video Demos


Video Overview
An overview of the basic functionality of a neural network.
