Overview
Sensitivity analysis asks how one variable responds to changes in another. For example, {% y %} may depend on {% x %}, and sensitivity analysis estimates how large {% \Delta y %} is for a given {% \Delta x %}.
Formal Definitions
If {% y %} is dependent on {% x %}, this is typically translated to mean that {% y %} is a function of {% x %}.
{% y = f(x) %}
Then, assuming that the function is differentiable, the sensitivity of {% y %} to {% x %} is the derivative of {% y %} with respect to {% x %}.
{% \frac{df(x)}{dx} %}
Then
{% \Delta y \approx \frac{df(x)}{dx} \times \Delta x %}
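As a minimal sketch of this idea (the function {% f %}, the point {% x_0 %}, and the step size below are illustrative assumptions, not taken from the text), the derivative can be approximated numerically and used to predict {% \Delta y %}:

```python
# Minimal sketch: numerically estimate dy/dx and use it to predict a change in y.
# The example function f, the point x0, and the step h are illustrative assumptions.

def f(x):
    return x ** 3               # example relationship y = f(x)

def derivative(f, x, h=1e-6):
    # Central-difference approximation of df/dx at x.
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 2.0
dx = 0.1                        # the known change in x
dy_estimate = derivative(f, x0) * dx   # Δy ≈ f'(x0) · Δx
dy_exact = f(x0 + dx) - f(x0)

print(dy_estimate)   # ≈ 1.2
print(dy_exact)      # ≈ 1.261
```

The gap between the estimate and the exact change reflects the fact that the derivative only captures the first-order (linear) behavior of {% f %} near {% x_0 %}.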
Multiple Variables
When {% y %} is a function of several variables, the sensitivity of {% y %} to any one of them is the partial derivative with respect to that variable.
{% y = f(x_1,...,x_n) %}
{% \frac{\partial{f(x_1,...,x_n)}}{\partial{x_i}} %}
Then we have
{% \Delta y \approx \frac{\partial{f(x_1,...,x_n)}}{\partial{x_1}} \Delta x_1 + \cdots + \frac{\partial{f(x_1,...,x_n)}}{\partial{x_n}} \Delta x_n %}
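The same numerical approach extends to several variables: approximate each partial derivative and sum the per-variable contributions. In this sketch the function {% f %}, the point, and the changes {% \Delta x_i %} are illustrative assumptions.

```python
# Minimal sketch: approximate each partial derivative and sum the contributions,
# matching Δy ≈ Σ_i (∂f/∂x_i) Δx_i. The function f and the inputs are illustrative.

def f(x1, x2):
    return x1 ** 2 * x2         # example: y = x1² · x2

def partial(f, args, i, h=1e-6):
    # Central-difference approximation of ∂f/∂x_i at the point `args`.
    up = list(args); up[i] += h
    down = list(args); down[i] -= h
    return (f(*up) - f(*down)) / (2 * h)

point = (3.0, 2.0)
deltas = (0.05, -0.1)           # known changes Δx1, Δx2

dy_estimate = sum(partial(f, point, i) * deltas[i] for i in range(len(point)))
dy_exact = f(*(p + d for p, d in zip(point, deltas))) - f(*point)

print(dy_estimate)   # ≈ -0.3
print(dy_exact)      # ≈ -0.325
```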
Sensitivity and Random Variables
In most real-world scenarios, there is not an exact relationship between the two variables in question. There is usually a random element involved, so the relationship is more like a correlation. In this case, {% y %} is not a function of {% x %}, but the expectation of {% y %} is, and assuming that the expectation is differentiable, the logic above carries through using the expectation:
{% \mathbb{E}[y] = f(x_1,...,x_n) %}
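As one illustrative way to apply this (not prescribed by the text): when observations of {% y %} are noisy, the sensitivity of {% \mathbb{E}[y] %} to a single {% x %} can be estimated from data, for example by fitting a local linear model and reading off its slope. The data-generating process below (true slope 2.0, Gaussian noise) is an assumption for illustration.

```python
# Minimal sketch: estimate dE[y]/dx from noisy observations via a least-squares
# slope, then use it to predict the change in E[y] for a given Δx.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = 2.0 * x + rng.normal(scale=0.5, size=x.size)   # E[y] = 2x, plus noise

# Fit E[y] ≈ a + b·x; the fitted slope b estimates dE[y]/dx.
slope, intercept = np.polyfit(x, y, deg=1)

dx = 0.1
dEy_estimate = slope * dx        # Δ E[y] ≈ (dE[y]/dx) · Δx
print(slope)                     # ≈ 2.0
print(dEy_estimate)              # ≈ 0.2
```

This treats the relationship as locally linear; if {% \mathbb{E}[y] %} is strongly nonlinear in {% x %}, the slope should be fit only over the neighborhood of interest.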