State Variable Methods in Automatic Control

Using numerous worked examples and problems, this book provides an introduction to the use of state-space methods in control system analysis and design. It has been written for readers with a basic background in linear control theory and linear algebra, and it offers physical insight into the procedures considered as well as presenting the theoretical derivations. State Variable Methods in Automatic Control will be of interest to students and lecturers on control engineering courses, particularly those covering state-space methods. It will also be valuable for professional engineers working in systems and circuit theory, as well as control system design.
Contents
MATHEMATICAL DESCRIPTION OF LINEAR SYSTEMS ... 1
CANONICAL FORMS AND MINIMAL REALIZATIONS ... 78
STATE FEEDBACK AND DECOUPLING ... 112
Copyright