APT - Maximum Likelihood

Overview


Arbitrage Pricing Theory

Maximum Likelihood


The likelihood of the drawn sample of points is:
{% \text{Likelihood} = \prod_i \sqrt{\frac{1}{2\pi\sigma^2}} \; e^{-\frac{1}{2}\left(\frac{x_i-\mu}{\sigma}\right)^2} %}
The log likelihood is then given by
{% \text{logLikelihood} = \sum_i \log\left[ \sqrt{\frac{1}{2\pi\sigma^2}} \; e^{-\frac{1}{2}\left(\frac{x_i-\mu}{\sigma}\right)^2} \right] %}
{% = \sum_i \left( \log\sqrt{\frac{1}{2\pi\sigma^2}} + \log\left[ e^{-\frac{1}{2}\left(\frac{x_i-\mu}{\sigma}\right)^2} \right] \right) %}
{% = \sum_i \left( \log\sqrt{\frac{1}{2\pi\sigma^2}} - \frac{1}{2}\left(\frac{x_i-\mu}{\sigma}\right)^2 \right) %}
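As a quick numeric check of the expression above (a sketch in Python; the sample size, {% \mu %}, and {% \sigma %} values are made up for illustration), the per-point term agrees with SciPy's normal log pdf:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma = 0.5, 2.0
x = rng.normal(loc=mu, scale=sigma, size=1000)  # drawn sample of points

# sum of log(1/sqrt(2*pi*sigma^2)) - 0.5*((x - mu)/sigma)^2
log_lik = np.sum(np.log(1.0 / np.sqrt(2 * np.pi * sigma**2))
                 - 0.5 * ((x - mu) / sigma) ** 2)

# should match scipy's normal log pdf summed over the sample
assert np.isclose(log_lik, norm.logpdf(x, loc=mu, scale=sigma).sum())
```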
A neural network can be constructed to forecast the {% \mu %} and {% \sigma %} of the distribution from the three factors. To do this, the network needs three inputs (one per factor) and two outputs, representing {% \mu %} and {% \sigma %}.
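A minimal sketch of such a network, assuming PyTorch (the framework choice, hidden-layer size, and the softplus used to keep {% \sigma %} positive are illustrative assumptions, not specified above):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistributionNet(nn.Module):
    """Maps the three factor values to the mu and sigma of a normal distribution."""

    def __init__(self, hidden: int = 16):
        super().__init__()
        self.hidden = nn.Linear(3, hidden)   # 3 inputs, one per factor
        self.out = nn.Linear(hidden, 2)      # 2 outputs: mu and (raw) sigma

    def forward(self, factors: torch.Tensor):
        h = torch.relu(self.hidden(factors))
        mu, raw_sigma = self.out(h).unbind(dim=-1)
        sigma = F.softplus(raw_sigma) + 1e-6  # keep sigma strictly positive (assumed choice)
        return mu, sigma
```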

Lastly, a loss function must be chosen, and here there is a bit of complexity. Most APIs assume you provide a target value that the predicted value is compared against directly. In this case there is no such target for the predicted {% \mu %} and {% \sigma %}; instead, we have to provide a loss function that is the negative log likelihood of the observed data under the predicted distribution.
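One possible way to write that loss, continuing the assumed PyTorch sketch above (the observed value x is passed into the loss directly rather than being compared to a prediction of the same quantity):

```python
import math
import torch

def negative_log_likelihood(mu: torch.Tensor, sigma: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Negative log likelihood of observations x under N(mu, sigma^2), averaged over the batch."""
    # per-point term: log(1/sqrt(2*pi*sigma^2)) - 0.5*((x - mu)/sigma)^2
    log_lik = -0.5 * torch.log(2 * math.pi * sigma**2) - 0.5 * ((x - mu) / sigma) ** 2
    return -log_lik.mean()
```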

Fitting
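A hedged sketch of how the fit could then proceed, reusing the DistributionNet and negative_log_likelihood sketches above; the synthetic factor data, optimizer, learning rate, and epoch count are all placeholder choices:

```python
import torch

# stand-in data: factor exposures (N, 3) and observed values x (N,)
factors = torch.randn(1024, 3)
x = 0.1 * factors.sum(dim=-1) + 0.05 * torch.randn(1024)

model = DistributionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):
    optimizer.zero_grad()
    mu, sigma = model(factors)
    loss = negative_log_likelihood(mu, sigma, x)  # minimize the negative log likelihood
    loss.backward()
    optimizer.step()
```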

