Control

Optimal Control

Optimal control is a method of controlling a system so as to minimize a given objective. It is well suited to complex systems and demanding applications.

  • It combines control (stability and robustness of the control system) with performance (optimality).

Optimal control frames control as an optimization problem.
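A minimal sketch of that framing, assuming a discrete-time double integrator with quadratic costs (the model, horizon, and weights here are illustrative, not from any particular application): the control sequence itself becomes the decision variable of a generic optimizer.

```python
# "Control as optimization": steer a discrete-time double integrator
# toward the origin by optimizing the control sequence directly.
# Model, horizon, and weights are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

dt, N = 0.1, 50                         # step size and horizon length
A = np.array([[1.0, dt], [0.0, 1.0]])   # double-integrator dynamics
B = np.array([0.0, dt])
x0 = np.array([1.0, 0.0])               # initial position 1, velocity 0

def cost(u_seq):
    """Quadratic running cost on state and input, plus a terminal cost."""
    x, J = x0.copy(), 0.0
    for u in u_seq:
        J += x @ x + 0.1 * u**2         # running cost c(x, u)
        x = A @ x + B * u               # roll the dynamics forward
    return J + 10.0 * (x @ x)           # terminal cost on the final state

res = minimize(cost, np.zeros(N))       # search over control sequences
print("optimal cost:", res.fun)
```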

[!ad-example] Examples of Optimal Control Problems

  • finding the optimal trajectory for a spacecraft to reach a distant planet with minimal fuel consumption.
  • finding the optimal control inputs for a robot arm to perform a task with minimal error
  • finding the optimal control inputs to regulate the temperature of a furnace at a desired setpoint while minimizing energy consumption.

Great reference: Anderson and Moore, Optimal Control: Linear Quadratic Methods.

There is a strong duality with the Kalman Filter, which computes the Bayes Filter update exactly for linear-Gaussian systems.
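A sketch of that duality for discrete-time linear systems (the symbols below are the standard ones, not defined elsewhere in this note): the LQR cost-to-go recursion and the Kalman filter covariance recursion share the same algebraic form under the substitutions $A \leftrightarrow A^\top$, $B \leftrightarrow C^\top$, $Q \leftrightarrow W$ (process noise covariance), and $R \leftrightarrow V$ (measurement noise covariance):

$$P_k = Q + A^\top P_{k+1} A - A^\top P_{k+1} B \left(R + B^\top P_{k+1} B\right)^{-1} B^\top P_{k+1} A$$

$$\Sigma_{k+1} = W + A \Sigma_k A^\top - A \Sigma_k C^\top \left(V + C \Sigma_k C^\top\right)^{-1} C \Sigma_k A^\top$$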

Formulation

Let $c(x, u)$ associate state-input pairs with a cost “density”. Here $u$ is the control output; see PID Control for the same notation.

Define the total cost
$$J(u) = \int_0^T c(x(t), u(t)) \, dt,$$
where the state $x(t)$ follows the system dynamics for all $t \in [0, T]$ ($T$ might be infinity).

The optimal control problem is to find $u^*$ such that $J(u^*)$ is minimized.
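For the linear-quadratic special case treated by Anderson and Moore, the finite-horizon problem is solved exactly by a backward Riccati recursion. A minimal sketch, assuming discrete-time dynamics and the same illustrative double-integrator model as above:

```python
# Finite-horizon discrete-time LQR via the backward Riccati recursion.
# Model and cost weights are illustrative assumptions.
import numpy as np

dt, N = 0.1, 50
A = np.array([[1.0, dt], [0.0, 1.0]])    # double-integrator dynamics
B = np.array([[0.0], [dt]])
Q = np.eye(2)                            # state cost weight
R = np.array([[0.1]])                    # input cost weight

# Backward pass: P is the cost-to-go matrix, K the feedback gain.
P, gains = Q.copy(), []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                          # gains[k] applies at step k

# Forward pass: the optimal policy is linear state feedback u_k = -K_k x_k.
x = np.array([[1.0], [0.0]])
for K in gains:
    x = (A - B @ K) @ x
print("final state:", x.ravel())
```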

Robustness

For the problem of robustness (dealing with uncertainty), multiple approaches are being researched. The best method depends on the specific application and on the nature of the uncertainties and disturbances in the system’s dynamics.

Software

CasADi is an open-source framework for nonlinear optimization and algorithmic differentiation that works well for these problems.
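A minimal sketch of what a CasADi formulation can look like, using its Opti stack with direct multiple shooting; the double-integrator model, horizon, bounds, and weights are illustrative assumptions:

```python
# Sketch: optimal control of a double integrator with CasADi's Opti stack.
# Model, horizon, bounds, and weights are illustrative assumptions.
import casadi as ca

N, dt = 50, 0.1
opti = ca.Opti()
X = opti.variable(2, N + 1)    # states: position and velocity over time
U = opti.variable(1, N)        # control input (acceleration) over time

opti.subject_to(X[:, 0] == ca.vertcat(1.0, 0.0))   # initial condition
for k in range(N):                                  # dynamics constraints
    x_next = X[:, k] + dt * ca.vertcat(X[1, k], U[0, k])
    opti.subject_to(X[:, k + 1] == x_next)
opti.subject_to(opti.bounded(-1, U, 1))             # input limits

# Quadratic cost on state deviation and control effort.
opti.minimize(ca.sumsqr(X) + 0.1 * ca.sumsqr(U))

opti.solver("ipopt")                                # interior-point NLP solver
sol = opti.solve()
print(sol.value(X[:, N]))                           # final state
```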