Optimization of Stochastic Systems: Topics in Discrete-time Dynamics
From the Preface
The first edition of this book was written mainly for audiences with physical science and engineering backgrounds. Nevertheless, it reached some readers with economic and management science training. The analytical training of graduate students in economics and management science has progressed greatly over the last 20 years, and many new research results and optimization algorithms have become available. My own interest has meanwhile shifted to the analysis of dynamics and optimization problems of economic and management science origin. With these developments and changes, I decided to rewrite much of the first edition to make it more accessible to graduate students and professionals in the social sciences. I have also incorporated some new analytic tools that I deem useful in analyzing the dynamic and stochastic problems that confront these readers. I hope that my efforts successfully bring intertemporal optimization problems closer to economics professionals.
New topics introduced into this second edition appear mostly in Chapters 2, 4, 5, 6, and 8. Martingales and martingale differences are introduced early in Chapter 2. Some limit theorems and asymptotic properties of linear state space models driven by martingale differences are presented. Because many excellent books are available on martingales and their limit theorems, derivations and proofs are mostly sketchy, and readers are referred to these sources.
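The linear state space models driven by martingale differences mentioned above can be illustrated with a minimal simulation. This is a hypothetical sketch, not material from the book: it uses the simplest martingale-difference sequence (i.i.d. mean-zero shocks) in a stable scalar model and checks that the sample variance approaches the stationary variance predicted by the usual limit theorems.

```python
import numpy as np

# Stable scalar state-space model  x_{t+1} = a * x_t + e_{t+1},
# where {e_t} is a martingale-difference sequence (here i.i.d. mean-zero
# Gaussian shocks, the simplest special case). For |a| < 1 the sample
# variance of x_t converges to sigma^2 / (1 - a^2).
rng = np.random.default_rng(0)
a, sigma, T = 0.5, 1.0, 200_000

e = rng.normal(0.0, sigma, size=T)   # martingale differences (i.i.d. case)
x = np.empty(T)
x[0] = 0.0
for t in range(T - 1):
    x[t + 1] = a * x[t] + e[t + 1]

sample_var = x.var()
theory_var = sigma**2 / (1 - a**2)   # stationary variance = 4/3 here
print(sample_var, theory_var)
```

The choice of i.i.d. shocks is only for brevity; the asymptotic results sketched in Chapter 2 cover more general martingale-difference noise.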
The results in Chapter 2 are applied in Chapters 5, 6, and 8, among other places. The notion of dynamic aggregation and its relation to cointegration and error-correction models is developed in Chapter 4. Some recursive parameter estimation schemes and their statistical properties are included in Chapters 5 and 6. Here again, books devoted entirely to these topics are available in the literature, and much had to be omitted to keep the second edition to a manageable size. In an appendix to Chapter 7, a potentially very powerful tool for proving the convergence of adaptive schemes is outlined. Rational expectations models and their solution methods are developed in Chapter 8 because of their widespread interest among economists. A very important class of sequential decision problems revolves around questions of approximating nonlinear dynamics, or more generally complex situations, with a sequence of less complex ones. Chapter 9 does not begin to do justice to this class of problems but is included as suggestive of work to be done.
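The recursive parameter estimation schemes treated in Chapters 5 and 6 can be sketched with the best-known member of the family, recursive least squares. This is an illustrative example under assumed data, not the book's own development: a scalar regression with an unknown coefficient is estimated one observation at a time, updating the estimate and a scalar precision term P.

```python
import numpy as np

# Hypothetical sketch: recursive least squares (RLS) for the scalar
# regression  y_t = theta * u_t + noise.  Each observation updates the
# estimate theta_hat and the scalar "covariance" P without refitting
# on the whole sample.
rng = np.random.default_rng(1)
theta_true = 2.0
T = 5_000

theta_hat, P = 0.0, 1e6   # diffuse prior on the unknown parameter
for t in range(T):
    u = rng.normal()                    # regressor
    y = theta_true * u + 0.1 * rng.normal()
    k = P * u / (1.0 + u * P * u)       # gain
    theta_hat += k * (y - u * theta_hat)  # innovation correction
    P -= k * u * P                      # precision update

print(theta_hat)
```

The same update structure, with P a matrix, underlies the vector case and the Kalman-filter connections drawn later in the book.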
When I first started contemplating the revision of the first edition, I benefited from a list of excellent suggestions from Rick van der Ploeg, though I did not necessarily incorporate all of his suggestions. Conversations with Thomas Sargent and Victor Solo were useful in organizing the material into the form of the second edition. I also benefited from discussions with Hashem Pesaran and correspondence with L. Broze in finalizing Chapter 8.
Some material in this book was used as lecture notes in a graduate course in the Department of Economics, University of California, Los Angeles, during the winter quarter of 1987. I thank the participants in the course for many useful comments.
* This major revision of the First Edition addresses optimization problems stated in terms of stochastic difference equations, which often contain uncertain or randomly varying parameters
* Presents a set of concepts and techniques useful in analyzing or controlling stochastic dynamic processes with possibly incompletely specified characteristics
* Discusses basic system properties such as:
* Stability and observability
* Dynamic programming formulations of optimal and adaptive control problems
* Parameter estimation schemes and their convergence behavior
* Solution methods for rational expectations models using martingale differences
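The first bullet point above, stability and observability, admits a compact numerical check. The following is a hypothetical sketch (the matrices A and C are made up for illustration): a discrete-time system x_{t+1} = A x_t, y_t = C x_t is stable when every eigenvalue of A lies inside the unit circle, and observable when the stacked observability matrix has full rank.

```python
import numpy as np

# Hypothetical example system: 2 states, 1 output.
A = np.array([[0.9, 0.2],
              [0.0, 0.5]])
C = np.array([[1.0, 0.0]])

# Stability: all eigenvalues of A strictly inside the unit circle.
stable = bool(np.all(np.abs(np.linalg.eigvals(A)) < 1.0))

# Observability: rank of [C; CA] equals the state dimension (n = 2).
obs_matrix = np.vstack([C, C @ A])
observable = bool(np.linalg.matrix_rank(obs_matrix) == A.shape[0])

print(stable, observable)
```

For n states the observability matrix stacks C, CA, ..., CA^{n-1}; the rank test is the standard algebraic criterion for the discrete-time case treated in the book.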