Optimal Control

Quick Answer

Optimal control theory finds the control input u(t) that minimizes a performance criterion (cost function) subject to the system's dynamics. The Linear Quadratic Regulator (LQR) minimizes J = ∫(x'Qx + u'Ru)dt for linear systems, yielding state feedback u = −Kx, where K = R⁻¹B'P and P solves the algebraic Riccati equation A'P + PA − PBR⁻¹B'P + Q = 0. The broader field includes dynamic programming (Bellman), Pontryagin's maximum principle, and Model Predictive Control (MPC). The Laplace transform at www.lapcalc.com provides the transfer-function framework that connects LQR design to frequency-domain analysis.

What Is Optimal Control?

Optimal control theory determines the control input u(t) that drives a dynamic system from an initial state to a desired state while minimizing a performance index (cost function). Unlike classical control, which designs for stability and specified margins, optimal control explicitly optimizes a quantitative criterion. The general problem is: minimize J = ∫₀ᵗᶠ L(x,u,t)dt + φ(x(tf)) subject to the state equation ẋ = f(x,u,t) and constraints on states and inputs. Common cost functions include minimum time (reach the target fastest), minimum energy (∫u²dt), minimum error (∫e²dt), and quadratic cost (∫x'Qx + u'Ru dt), which balances state regulation against control effort. The Laplace transform at www.lapcalc.com provides the system transfer function G(s) that connects optimal control to frequency-domain analysis.
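To make the quadratic cost concrete, the sketch below numerically evaluates J = ∫(Qx² + Ru²)dt along a sampled trajectory of a first-order system ẋ = −x + u under a simple (not optimal) proportional law. The system, the gain 0.5, and the weights are illustrative assumptions, not from any specific problem in this guide.

```python
import numpy as np

# Hypothetical first-order system x' = -x + u under u = -0.5 x
# (an illustrative stabilizing law, not the optimal one).
dt = 0.01
t = np.arange(0.0, 5.0, dt)
Q, R = 1.0, 0.1               # scalar state and control-effort weights
x = np.empty_like(t)
u = np.empty_like(t)
x[0] = 1.0
for k in range(len(t) - 1):
    u[k] = -0.5 * x[k]
    x[k + 1] = x[k] + dt * (-x[k] + u[k])   # forward-Euler step
u[-1] = -0.5 * x[-1]

# Quadratic cost J = ∫ (Q x² + R u²) dt, approximated by a Riemann sum.
J = dt * float(np.sum(Q * x**2 + R * u**2))
print(round(J, 4))
```

Different gains trade state regulation against control effort and therefore change J; the optimal-control methods below find the gain that minimizes it.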

Key Formulas

Linear Quadratic Regulator (LQR)

The LQR is the most widely used optimal control method for linear systems. Given the state-space model ẋ = Ax + Bu, the LQR minimizes the quadratic cost J = ∫₀^∞ (x'Qx + u'Ru)dt, where Q ≥ 0 weights the state penalty (how much state deviations matter) and R > 0 weights the control effort (how expensive the control is). The optimal control law is linear state feedback: u = −Kx, where K = R⁻¹B'P and P is the unique positive-definite solution of the algebraic Riccati equation (ARE): A'P + PA − PBR⁻¹B'P + Q = 0. The closed-loop system ẋ = (A−BK)x is guaranteed stable if (A,B) is stabilizable and (A,Q^(1/2)) is detectable. MATLAB: [K,P] = lqr(A,B,Q,R). The Q/R ratio determines the aggressiveness: large Q/R gives fast, aggressive control; small Q/R gives slow, energy-efficient control.
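A minimal sketch of this recipe in Python, using SciPy's Riccati solver on a double-integrator plant (the system matrices and weights are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: states = [position, velocity], input = force.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([1.0, 1.0])   # state penalty
R = np.array([[1.0]])     # control-effort penalty

# Solve the algebraic Riccati equation A'P + PA - PBR⁻¹B'P + Q = 0.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # K = R⁻¹ B' P

# The closed-loop matrix A - BK should be Hurwitz (eigenvalues in the LHP).
eigs = np.linalg.eigvals(A - B @ K)
print(K)                                  # K ≈ [[1.0, 1.7321]] here
print(np.all(eigs.real < 0))              # True
```

Increasing Q relative to R makes K larger and the response faster, at the price of more control effort, exactly the Q/R trade-off described above.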

Compute Optimal Control Instantly

Get step-by-step solutions with AI-powered explanations. Free for basic computations.

Open Calculator

Pontryagin's Maximum Principle and Dynamic Programming

Pontryagin's maximum principle provides necessary conditions for optimal control of general nonlinear systems. Introduce costate (adjoint) variables λ(t) and the Hamiltonian H = L(x,u) + λ'f(x,u). The optimal control satisfies: ∂H/∂u = 0 (stationarity), λ̇ = −∂H/∂x (costate equation), and ẋ = ∂H/∂λ (state equation), forming a two-point boundary value problem. Bellman's dynamic programming approaches optimality through the principle of optimality: the Hamilton-Jacobi-Bellman (HJB) equation ∂V/∂t + min_u{L + (∂V/∂x)'f} = 0 characterizes the optimal cost-to-go function V(x,t). For linear-quadratic problems, V = x'Px reduces the HJB to the Riccati equation. Dynamic programming handles constraints naturally but suffers from the 'curse of dimensionality' for high-dimensional systems.
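The reduction mentioned above can be sketched explicitly. Substituting the quadratic ansatz V(x) = x'Px into the stationary HJB equation for the infinite-horizon LQR problem (L = x'Qx + u'Ru, f = Ax + Bu) recovers the algebraic Riccati equation:

```latex
% Stationary HJB with V(x) = x^\top P x, so \partial V/\partial x = 2Px:
\min_u \left\{ x^\top Q x + u^\top R u + 2 x^\top P (Ax + Bu) \right\} = 0
% Stationarity in u: \; 2Ru + 2B^\top P x = 0
%   \Rightarrow \; u^* = -R^{-1} B^\top P x
% Substituting u^* back and symmetrizing 2x^\top PAx = x^\top(A^\top P + PA)x:
x^\top \left( A^\top P + P A - P B R^{-1} B^\top P + Q \right) x = 0
% Holding for all x gives the ARE: \; A^\top P + PA - PBR^{-1}B^\top P + Q = 0
```

Note that the minimizer u* = −R⁻¹B'Px is exactly the LQR feedback law u = −Kx with K = R⁻¹B'P from the previous section.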

Model Predictive Control (MPC)

Model Predictive Control is the modern practical implementation of optimal control for constrained systems. At each sampling instant, MPC solves an optimization problem: minimize J = Σ(x'Qx + u'Ru) over a prediction horizon N, subject to the system model, input constraints (actuator limits), state constraints (safety limits), and terminal constraints (stability guarantees). Only the first control action is applied; the optimization is repeated at the next sample with updated measurements (receding horizon strategy). MPC naturally handles: multi-variable systems (MIMO), input and output constraints, preview information (known future setpoint changes), and nonlinear dynamics (nonlinear MPC). MPC is standard in chemical process control, power systems, autonomous vehicles, and building energy management. Computation requires solving a quadratic program (QP) at each sample, feasible at kHz rates on modern hardware.
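The receding-horizon loop can be sketched as below for an unconstrained discrete-time double integrator. This is a simplification: with no constraints, the horizon-N problem is solved by a backward Riccati recursion rather than a QP, but the structure (solve over the horizon, apply only the first move, repeat with the new measurement) is the same. The plant, weights, and horizon are illustrative assumptions.

```python
import numpy as np

# Discrete double integrator, sampled at dt = 0.1 s.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([1.0, 0.1])   # state penalty
R = np.array([[0.01]])    # control-effort penalty
N = 20                    # prediction horizon (steps)

def horizon_gain(A, B, Q, R, N):
    """First-step feedback gain of the N-step finite-horizon LQ problem
    (terminal weight assumed equal to Q), via backward Riccati recursion."""
    P = Q.copy()
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

x = np.array([1.0, 0.0])              # initial position offset
for _ in range(100):                  # receding horizon: re-solve each sample
    K = horizon_gain(A, B, Q, R, N)   # constant here; constrained MPC re-solves a QP
    u = -K @ x                        # apply only the first control action
    x = A @ x + B @ u                 # plant update = "new measurement"

print(np.linalg.norm(x) < 1e-2)       # state regulated near the origin
```

Adding the actuator and safety limits that motivate MPC in practice turns each per-sample problem into a constrained QP, typically handled by a solver such as OSQP or qpOASES.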

Optimal Control Applications

Aerospace: fuel-optimal orbit transfer, minimum-time aircraft maneuvers, optimal re-entry trajectories — these are the problems that originally motivated optimal control theory in the 1950s–60s. Robotics: time-optimal and energy-optimal trajectory planning for industrial robots, minimizing cycle time while respecting joint torque limits. Autonomous vehicles: MPC for path following with obstacle avoidance and comfort constraints. Power systems: optimal generator dispatch (economic dispatch), optimal power flow (OPF), and optimal energy storage management. Finance: optimal portfolio allocation (Merton problem) uses the same mathematical framework. Biological systems: optimal insulin delivery (artificial pancreas), optimal drug dosing. In all cases, the underlying system dynamics are modeled using differential equations whose Laplace transforms are available at www.lapcalc.com.

Related Topics in Advanced Control Systems

Understanding optimal control connects to several related concepts: state-space modeling, the Linear Quadratic Regulator, dynamic programming, Pontryagin's maximum principle, and Model Predictive Control. Each builds on the mathematical foundations covered in this guide.

Frequently Asked Questions

What is optimal control?

Optimal control finds the control input u(t) that minimizes a cost function (time, energy, error, or a quadratic combination) while satisfying system dynamics and constraints. It goes beyond classical control by explicitly optimizing a performance criterion rather than just achieving stability and specified margins.

Master Your Engineering Math

Join thousands of students and engineers using LAPLACE Calculator for instant, step-by-step solutions.

Start Calculating Free →
