Optimal Control Theory for Infinite Dimensional Systems

Birkhäuser, 1995 - Control theory - 448 pages
Infinite dimensional systems describe many physical phenomena in the real world. Well-known examples are heat conduction, the vibration of elastic materials, diffusion-reaction processes, and population dynamics, among others. Optimal control theory for infinite dimensional systems therefore has a wide range of applications in engineering, economics, and other fields. The theory also has its own mathematical interest: it is a generalization of the classical calculus of variations and gives rise to many interesting mathematical questions.

The Pontryagin maximum principle, Bellman's dynamic programming method, and Kalman's optimal linear quadratic regulator theory are regarded as the three milestones of modern (finite dimensional) control theory. Since the 1960s, a corresponding theory for infinite dimensional systems has been developed. The essential difficulties in the infinite dimensional setting come from two sources: the unboundedness of the differential operator (the generator of the strongly continuous semigroup) and the lack of local compactness of the underlying spaces. The purpose of this book is to introduce optimal control theory for infinite dimensional systems. The authors present the existence theory for optimal control problems, and some applications are also included in this volume.
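As a concrete illustration (this sketch is not taken from the book's text), the heat conduction example can be cast as an abstract evolution equation on a Hilbert space, which makes the unboundedness difficulty visible:

```latex
% Controlled heat equation on a bounded domain \Omega \subset \mathbb{R}^n:
%   \partial_t y(t,x) = \Delta y(t,x) + u(t,x),  \quad y|_{\partial\Omega} = 0.
% Abstract form on the state space H = L^2(\Omega):
\dot{y}(t) = A y(t) + u(t), \qquad y(0) = y_0 \in H,
% where A = \Delta with domain
D(A) = H^2(\Omega) \cap H_0^1(\Omega) \subsetneq H.
% A is unbounded on H but generates a strongly continuous
% (C_0) semigroup (e^{tA})_{t \ge 0}, so mild solutions read
y(t) = e^{tA} y_0 + \int_0^t e^{(t-s)A} u(s)\, ds.
```

Here the generator \(A\) is defined only on a dense subspace \(D(A)\), not on all of \(H\), and closed balls in \(H\) are not compact; these are exactly the two difficulties the description attributes to the infinite dimensional theory.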



