Optimization of Stochastic Systems: Topics in Discrete-time Dynamics
From the Preface
The first edition of this book was written mainly for audiences with physical science and engineering backgrounds. Nevertheless, it reached some readers with economic and management science training. The analytical training of graduate students in economics and the management sciences has progressed considerably over the last 20 years, and many new research results and optimization algorithms have become available. My own interests have in the meantime shifted to the analysis of dynamic and optimization problems of economic and management science origin. With these developments and changes in mind, I decided to rewrite much of the first edition to make it more accessible to graduate students and professionals in the social sciences. I have also incorporated some new analytic tools that I deem useful in analyzing the dynamic and stochastic problems confronting these readers. I hope that my efforts succeed in bringing intertemporal optimization problems closer to economics professionals.
New topics introduced into this second edition appear mostly in Chapters 2, 4, 5, 6, and 8. Martingales and martingale differences are introduced early in Chapter 2. Some limit theorems and asymptotic properties of linear state space models driven by martingale differences are presented. Because many excellent books are available on martingales and their limit theorems, derivations and proofs are mostly sketchy, and readers are referred to these sources.
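As a small illustration of the kind of object treated in Chapter 2, the sketch below simulates a stable scalar state equation driven by a martingale difference sequence that is not i.i.d. (an ARCH-type shock whose conditional mean is zero but whose conditional variance depends on the past). All parameter values and variable names here are illustrative choices, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50_000

# Martingale-difference shocks: w_t = e_t * sqrt(a0 + a1*w_{t-1}^2).
# E[w_t | past] = 0, so {w_t} is a martingale difference sequence,
# even though the w_t are not independent (their variance is state-dependent).
a0, a1 = 0.5, 0.4
e = rng.standard_normal(T)
w = np.zeros(T)
for t in range(1, T):
    w[t] = e[t] * np.sqrt(a0 + a1 * w[t - 1] ** 2)

# Stable scalar state equation x_{t+1} = a*x_t + w_t with |a| < 1.
a = 0.8
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = a * x[t] + w[t]

# A law-of-large-numbers-type property: the sample mean of the state
# is close to zero for large T, one of the asymptotic results sketched
# in Chapter 2 for models driven by martingale differences.
print(abs(x.mean()))
```

The point of using ARCH-type rather than i.i.d. shocks is that the limit theorems in question require only the martingale difference property, not independence.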
The results in Chapter 2 are applied in Chapters 5, 6, and 8, among other places. The notion of dynamic aggregation and its relation to cointegration and error-correction models is developed in Chapter 4. Some recursive parameter estimation schemes and their statistical properties are included in Chapters 5 and 6. Here again, books devoted entirely to these topics are available in the literature, and much had to be omitted to keep the second edition to a manageable size. In an appendix to Chapter 7, a potentially very powerful tool for proving convergence of adaptive schemes is outlined. Rational expectations models and their solution methods are developed in Chapter 8 because of their widespread interest among economists. A very important class of problems in sequential decision making revolves around questions of approximating nonlinear dynamics, or more generally complex situations, with a sequence of less complex ones. Chapter 9 does not begin to do justice to this class of problems but is included as suggestive of work to be done.
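The cointegration and error-correction ideas of Chapter 4 can be illustrated with a minimal two-variable simulation: one series follows a random walk, and a second is pulled toward it by an error-correction term, so that each series is nonstationary while their difference is stationary. The adjustment speed `gamma` and all other values below are illustrative assumptions, not the book's notation.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 20_000
gamma = 0.5  # error-correction adjustment speed (illustrative)

# Common stochastic trend: x_t is a random walk.
x = np.cumsum(rng.standard_normal(T))

# Error-correction form: the change in y is pulled toward x,
# so y shares x's trend and the "equilibrium error" y - x is stationary.
y = np.zeros(T)
for t in range(1, T):
    y[t] = y[t - 1] - gamma * (y[t - 1] - x[t - 1]) + rng.standard_normal()

spread = y - x  # the cointegrating combination
# The individual series wander (large variance), while the spread does not.
print(np.var(x), np.var(spread))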
When I first started contemplating the revision of the first edition, I benefited from a list of excellent suggestions from Rick van der Ploeg, though I did not necessarily incorporate all of them. Conversations with Thomas Sargent and Victor Solo were useful in organizing the material into the form of the second edition. I also benefited from discussions with Hashem Pesaran and correspondence with L. Broze in finalizing Chapter 8.
Some material in this book was used as lecture notes in a graduate course in the Department of Economics, University of California, Los Angeles, the winter quarter of 1987. I thank the participants in the course for many useful comments.
* This major revision of the First Edition addresses optimization problems formulated as stochastic difference equations, which often contain uncertain or randomly varying parameters
* Presents a set of concepts and techniques useful in analyzing or controlling stochastic dynamic processes, possibly with incompletely specified characteristics
* Discusses basic system properties and core techniques, including:
* Stability and observability
* Dynamic programming formulations of optimal and adaptive control problems
* Parameter estimation schemes and their convergence behavior
* Solution methods for rational expectations models using martingale differences