## Infinite Horizon Optimal Control: Deterministic and Stochastic Systems

This monograph deals with various classes of deterministic and stochastic continuous-time optimal control problems defined over unbounded time intervals. For these problems the performance criterion is described by an improper integral, and it is possible that, when evaluated at a given admissible element, this criterion is unbounded. To cope with this divergence, new optimality concepts, referred to here as overtaking optimality, weakly overtaking optimality, agreeable plans, etc., have been proposed.

The motivation for studying these problems arises primarily from the economic and biological sciences, where models of this type arise naturally. Indeed, any bound placed on the time horizon is artificial when one considers the evolution of the state of an economy or a species. The responsibility for introducing this interesting class of problems rests with the economists who first studied them in the modeling of capital accumulation processes. Perhaps the earliest of these was F. Ramsey [152] who, in his seminal 1928 work on the theory of saving, considered a dynamic optimization model defined on an infinite time horizon. Briefly, this problem can be described as a Lagrange problem with an unbounded time interval. The advent of modern control theory, particularly the formulation of the famous Maximum Principle of Pontryagin, has had a considerable impact on the treatment of these models as well as on optimization theory in general.
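As a brief sketch of the idea behind these optimality concepts (using standard notation for the truncated criterion, not fixed by the passage above): one compares admissible pairs via finite-horizon truncations of the improper-integral criterion, since the integral itself may diverge.

```latex
% Truncated (finite-horizon) criterion for an admissible pair (x,u),
% with running reward f_0 (maximization convention assumed here):
\[
  J_T(x,u) \;=\; \int_0^T f_0\bigl(x(t),u(t)\bigr)\,dt .
\]

% An admissible pair (x^*,u^*) is \emph{overtaking optimal} if, for every
% admissible pair (x,u) emanating from the same initial state,
\[
  \liminf_{T\to\infty}\,\bigl[\,J_T(x^*,u^*) - J_T(x,u)\,\bigr] \;\ge\; 0,
\]

% and \emph{weakly overtaking optimal} if only the weaker condition
\[
  \limsup_{T\to\infty}\,\bigl[\,J_T(x^*,u^*) - J_T(x,u)\,\bigr] \;\ge\; 0
\]
% holds. Note that neither definition requires J_T to converge as T grows.
\]
```

These are the standard textbook formulations of the two concepts; the precise regularity and admissibility assumptions vary with the problem class treated.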


### Contents

| Section | Page |
| --- | --- |
| Dynamical Systems with Unbounded Time Interval in Engineering | 1 |
| Necessary Conditions and Sufficient Conditions for Optimality | 20 |
| Asymptotic Stability and the Turnpike Property in Some Simple Con… | 32 |

10 other sections not shown.

### Other editions

*Infinite Horizon Optimal Control.* Dean A. Carlson, Alain B. Haurie, Arie Leizarowitz, 1991.


### References to this book

*Control and Game-Theoretic Models of the Environment.* Jerzy Filar, Carlo Carraro, 1995.