Introduction to Optimal Control