# Optimal Control

Optimal control is a method of controlling a system so as to minimize a given objective. It is well suited to complex systems and to applications where performance matters as much as stability.

- It is control (stability and robustness of the closed-loop system) + performance (optimality)

Optimal control frames control as an optimization problem.

[!ad-example] Examples of Optimal Control Problems

- finding the optimal trajectory for a spacecraft to reach a distant planet with minimal fuel consumption.
- finding the optimal control inputs for a robot arm to perform a task with minimal error
- finding the optimal control inputs to regulate the temperature of a furnace at a desired setpoint while minimizing energy consumption.

Great reference: *Optimal Control: Linear Quadratic Methods* by Anderson and Moore

There is a strong duality with the Kalman Filter, which computes the Bayes Filter update for linear-Gaussian models.

Let $g(x,u): X \times U \to \mathbb{R}_{+}$ associate state-input pairs with a cost “density”. Here $u(t)$ is the control input; see PID Control for the same notation.

Define $L(u) = \int_{0}^{T} g(x(t), u(t)) \, dt$, where $\dot{x}(t) = f(x(t), u(t))$ for all $t \in [0, T]$ ($T$ may be infinite).

The optimal control problem is to find $u(t)$ that minimizes $L(u)$ subject to the dynamics.
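For the linear-quadratic case (linear dynamics $f$, quadratic cost $g$), the discrete-time finite-horizon problem can be solved in closed form by a backward Riccati recursion. A minimal sketch in NumPy, assuming a double-integrator model and hand-picked weights `Q`, `R` (all choices here are illustrative, not from the source):

```python
import numpy as np

# Discrete-time double integrator: x_{k+1} = A x_k + B u_k  (illustrative model)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)          # state cost weight (g's quadratic term in x)
R = np.array([[0.1]])  # input cost weight (g's quadratic term in u)
N = 50                 # horizon length (discretized [0, T])

# Backward Riccati recursion: compute feedback gains K_k so that
# the optimal control is the linear state feedback u_k = -K_k x_k.
P = Q.copy()
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()  # gains[k] is the gain to apply at step k

# Simulate the closed loop from an initial state.
x = np.array([[1.0], [0.0]])
for K in gains:
    u = -K @ x
    x = A @ x + B @ u
print(np.linalg.norm(x))  # the state is driven toward the origin
```

The recursion runs backward in time because the optimal gain at step $k$ depends on the cost-to-go from step $k+1$ onward.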

### Robustness

For the problem of robustness (dealing with uncertainty), there are multiple solutions being researched:

- Robust Optimization
- Robust Control
- Adaptive Control
- Robustness by using a combination of models
- Hybrid Control

The best method to use depends on the specific application and the nature of the uncertainties and disturbances in the system’s dynamics.

### Software

CasADi is an open-source framework for nonlinear optimization and algorithmic differentiation that is widely used to formulate and solve optimal control problems.