Overview
Optimal control problems are decision-making problems that evolve over time. There is some objective that can only be achieved by applying a force or other resource to the system, and the objective cannot be achieved instantaneously. The system evolves toward or away from the objective based in part on the resources applied and in part on random noise.
The goal of optimal control is to achieve the stated objective in some optimal fashion. An optimal controller continuously monitors the current state of the system and makes continuous adjustments in order to achieve that objective.
Statement of the Problem
The problem is stated in terms of two vectors:
- {% \vec{x} \in \mathbb{R}^n %} is the state vector. This is the state of the system being controlled.
- {% \vec{u}(\vec{x}) \in \mathbb{R}^m %} is the control vector. This vector is set by the user (the controller). The evolution of the state vector depends in some specified way on the control vector: the user sets the control vector, and the state vector responds, possibly with some random noise, as sketched in the example after this list.
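
To make the state–control relationship concrete, here is a minimal simulation sketch. It assumes discrete-time linear dynamics {% \vec{x}_{k+1} = A\vec{x}_k + B\vec{u}_k + \vec{w}_k %} with Gaussian noise {% \vec{w}_k %} and a simple linear feedback law {% \vec{u} = -K\vec{x} %}; the matrices `A`, `B`, `K` and the noise level are illustrative assumptions, not part of the problem statement above.

```python
import numpy as np

# Sketch of a controlled system, assuming discrete-time linear dynamics
#   x_{k+1} = A x_k + B u_k + w_k
# with Gaussian noise w_k. A, B, the feedback gain K, and the noise level
# are illustrative choices, not specified by the problem statement.

rng = np.random.default_rng(0)

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # state transition matrix (n = 2)
B = np.array([[0.0],
              [0.1]])        # control input matrix (m = 1)
K = np.array([[0.5, 1.0]])   # assumed feedback gain: u = -K x
noise_std = 0.01             # standard deviation of the random noise

x = np.array([1.0, 0.0])     # initial state vector
for k in range(100):
    u = -K @ x               # controller observes the state and sets the control
    w = noise_std * rng.standard_normal(2)
    x = A @ x + B @ u + w    # state responds to the control, plus random noise

print("final state:", x)
```

Running the loop drives the state toward the origin while the noise keeps it from settling exactly; choosing the gain `K` well (rather than by hand, as here) is precisely what the optimal control problem formalizes.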