A graduate-level text that presents modern optimal control theory in a direct and organized manner. Relationships to classical control theory are shown, as well as a root-locus approach to the design of steady-state controllers. The reader is encouraged to simulate and implement optimal controllers using personal computer programs. State-variable and polynomial optimal controllers are discussed, and numerous practical examples are provided. Detailed appendixes cover matrix algebra and computer software listings.
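As an illustration of the kind of simulation the text encourages, the finite-horizon discrete-time LQ regulator can be computed with a backward Riccati recursion and then run forward on a plant. This is a minimal sketch, not code from the book; the double-integrator plant, weights, and horizon below are assumptions chosen for the example.

```python
import numpy as np

def dlqr_finite(A, B, Q, R, QN, N):
    """Time-varying LQR gains K[0..N-1] for x_{k+1} = A x_k + B u_k,
    minimizing sum_k (x'Qx + u'Ru) + x_N' QN x_N, via the backward
    Riccati recursion S_k = A'S_{k+1}(A - B K_k) + Q."""
    S = QN
    gains = []
    for _ in range(N):
        # Optimal gain at this stage: K = (R + B'SB)^{-1} B'SA
        K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
        S = A.T @ S @ (A - B @ K) + Q
        gains.append(K)
    gains.reverse()  # recursion runs backward in time
    return gains

# Assumed example plant: discretized double integrator, dt = 0.1
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
QN = 10 * np.eye(2)
N = 50

K = dlqr_finite(A, B, Q, R, QN, N)

# Closed-loop simulation u_k = -K_k x_k from x_0 = [1, 0]'
x = np.array([[1.0], [0.0]])
for k in range(N):
    x = (A - B @ K[k]) @ x
print(float(np.linalg.norm(x)))  # state regulated toward the origin
```

The same recursion appears in any standard LQR treatment; for the infinite-horizon case the gains converge to a constant, which is the steady-state regulator the blurb's root-locus discussion concerns.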
OPTIMAL CONTROL OF DISCRETE-TIME SYSTEMS
OPTIMAL CONTROL OF CONTINUOUS-TIME SYSTEMS
Common terms and phrases: boundary conditions, Bryson, closed-loop plant, closed-loop poles, closed-loop system, component, compute, constant, constraint, control input, cost index, cost kernel, cost to go, costate equations, CTLQR, define, derive, desired, digital control, discrete, dynamic programming, eigenvalues, eigenvectors, Example, FIGURE, final condition, final costate, find the optimal, fixed-final-state, free-final-state, function, gramian, Hamiltonian system, increment, initial condition, integration interval, Kalman gain, Let the plant, linear quadratic, LQ regulator, Lyapunov equation, minimize, minimum-fuel, nonlinear, Note, open-loop control, optimal control, optimal control problem, optimal cost, optimal feedback gain, optimal state trajectory, performance index, polynomial, Pontryagin's minimum principle, positive definite, reachability, RETURN END, Riccati equation, root locus, Scalar System, Section, shown in Fig, simulation, solve, stable, stationarity condition, stationary point, suboptimal, SUBROUTINE, Suppose, switching, symmetric, Table, time-invariant, time-varying, time-varying system, tracker, vector, write, xk+1, yields