Markov Models & Optimization

Routledge, Feb 19, 2018 - Mathematics - 308 pages
This book presents a radically new approach to problems of evaluating and optimizing the performance of continuous-time stochastic systems. This approach is based on the use of a family of Markov processes called Piecewise-Deterministic Processes (PDPs) as a general class of stochastic system models. A PDP is a Markov process that follows deterministic trajectories between random jumps, the latter occurring either spontaneously, in a Poisson-like fashion, or when the process hits the boundary of its state space. This formulation includes an enormous variety of applied problems in engineering, operations research, management science and economics as special cases; examples include queueing systems, stochastic scheduling, inventory control, resource allocation problems, optimal planning of production or exploitation of renewable or non-renewable resources, insurance analysis, fault detection in process systems, and tracking of maneuvering targets, among many others.
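
To make the dynamics concrete, the sketch below simulates one PDP path under illustrative assumptions that are not taken from the book: a one-dimensional state with constant drift between jumps, a constant spontaneous jump rate, a forced jump when the trajectory reaches the boundary x = 1, and a uniform multiplicative reset kernel. The function simulate_pdp and all of its parameters are hypothetical names chosen for this example.

    import math
    import random

    def simulate_pdp(t_end, x0=0.2, drift=0.5, rate=1.0, boundary=1.0, seed=0):
        """Simulate one illustrative PDP path on [0, t_end].

        Between jumps the state follows the deterministic flow dx/dt = drift.
        Jumps occur spontaneously at constant rate `rate` (Poisson-like) or are
        forced when the trajectory reaches x = boundary; after either kind of
        jump the state is redrawn from the kernel x -> U * x, U ~ Uniform(0, 1).
        """
        rng = random.Random(seed)
        t, x = 0.0, x0
        path = [(t, x)]
        while t < t_end:
            # Time for the deterministic flow to reach the boundary.
            t_boundary = (boundary - x) / drift if drift > 0 else math.inf
            # Time to the next spontaneous jump (exponential, since the rate is constant).
            t_spont = rng.expovariate(rate)
            dt = min(t_boundary, t_spont, t_end - t)
            t += dt
            x += drift * dt  # deterministic motion between jumps
            path.append((t, x))
            if t >= t_end:
                break
            # A jump occurred (spontaneous or boundary-forced): apply the reset kernel.
            x *= rng.random()
            path.append((t, x))
        return path

Calling simulate_pdp(10.0) returns (time, state) pairs recording the endpoints of each deterministic segment together with the post-jump values; averaging functionals of such paths over many runs is the crudest way to estimate the performance measures that the extended-generator machinery described below handles analytically.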

The first part of the book shows how these applications lead to the PDP as a system model and derives the main properties of PDPs, with particular emphasis on the so-called extended generator of the process, which gives a general method for calculating expectations and distributions of system performance functions. The second part is devoted to control theory for PDPs, with a view to controlling PDP models for optimal performance: characterizations are obtained of optimal strategies both for continuously acting controllers and for control by intervention (impulse control). Throughout the book, modern methods of stochastic analysis are used, but all the necessary theory is developed from scratch and presented in a self-contained way. The book will be useful to engineers and scientists in the application areas as well as to mathematicians interested in applications of stochastic analysis.
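
For orientation, the extended generator of a PDP has a well-known explicit form (the precise domain conditions are part of what the book establishes; the notation below is only indicative). For a flow generated by a vector field \mathfrak{X}, a jump rate \lambda and a post-jump transition measure Q on the state space E,

\[
  \mathfrak{A} f(x) \;=\; \mathfrak{X} f(x) + \lambda(x) \int_E \bigl( f(y) - f(x) \bigr)\, Q(dy; x),
\]

together with the boundary condition f(x) = \int_E f(y)\, Q(dy; x) at points of the active boundary of E. Combined with Dynkin's formula, \mathbb{E}_x[f(X_t)] = f(x) + \mathbb{E}_x \int_0^t \mathfrak{A} f(X_s)\, ds, this is what reduces the computation of expectations of performance functionals to solving equations involving \mathfrak{A}.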
 

Contents

Preface
Analysis, probability and stochastic processes
    Analysis
    Probability theory
    Stochastic processes
    Markov processes
    Notes and references
Piecewise-deterministic Markov processes
    Markov models and supplementary variables
    Ordinary differential equations and vector fields
    Simulation
    Definition of the PDP
    The strong Markov property
    The extended generator of the PDP
    Further Markov properties of the PDP
    Notes and references
Distributions and expectations
    The differential formula and transformations of PDPs
    Expectations
    Applications
    Stationary distributions
    Notes and references
Control theory
    Feedback control of PDPs
    Naïve dynamic programming
    Relaxed controls
    Control via discrete-time dynamic programming
    Nonsmooth analysis and deterministic optimal control: the generalized Bellman equation
    Discrete-stage Markov decision models
Control by intervention
    Optimal stopping
Jump processes and their martingales
Bibliography
Subject index
Copyright

About the author (2018)

Davis, M.H.A.
