Markov Models & Optimization

This book presents a radically new approach to problems of evaluating and optimizing the performance of continuous-time stochastic systems. This approach is based on the use of a family of Markov processes called Piecewise-Deterministic Processes (PDPs) as a general class of stochastic system models. A PDP is a Markov process that follows deterministic trajectories between random jumps, the latter occurring either spontaneously, in a Poisson-like fashion, or when the process hits the boundary of its state space. This formulation includes an enormous variety of applied problems in engineering, operations research, management science and economics as special cases; examples include queueing systems, stochastic scheduling, inventory control, resource allocation problems, optimal planning of production or exploitation of renewable or non-renewable resources, insurance analysis, fault detection in process systems, and tracking of maneuvering targets, among many others.

The first part of the book shows how these applications lead to the PDP as a system model, and the main properties of PDPs are derived. There is particular emphasis on the so-called extended generator of the process, which gives a general method for calculating expectations and distributions of system performance functions. The second half of the book is devoted to control theory for PDPs, with a view to controlling PDP models for optimal performance: characterizations are obtained of optimal strategies both for continuously-acting controllers and for control by intervention (impulse control).

Throughout the book, modern methods of stochastic analysis are used, but all the necessary theory is developed from scratch and presented in a self-contained way. The book will be useful to engineers and scientists in the application areas as well as to mathematicians interested in applications of stochastic analysis.
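The two jump mechanisms described above — spontaneous jumps from an exponential clock and forced jumps when the state hits the boundary — can be illustrated with a toy simulation. Everything in this sketch is a hypothetical example, not taken from the book: the drift dx/dt = 1, the state space [0, 1] with boundary at x = 1, the uniform post-jump distribution, and the name `simulate_pdp` are all illustrative assumptions.

```python
import random

def simulate_pdp(horizon=10.0, rate=0.5, seed=0):
    """Simulate a toy piecewise-deterministic process on [0, 1].

    Between jumps the state drifts deterministically: dx/dt = 1.
    A jump occurs either spontaneously (exponential clock with
    intensity `rate`, the Poisson-like mechanism) or, if the clock
    has not rung first, when the state reaches the boundary x = 1
    (the forced-jump mechanism). After each jump the new state is
    drawn uniformly on [0, x); all of these modelling choices are
    made up for illustration. Returns a list of
    (jump_time, pre_jump_state, kind) events.
    """
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    events = []
    while t < horizon:
        time_to_boundary = 1.0 - x           # deterministic hitting time
        clock = rng.expovariate(rate)        # exponential jump clock
        if clock < time_to_boundary:
            t += clock
            x += clock
            kind = "spontaneous"
        else:
            t += time_to_boundary
            x = 1.0
            kind = "boundary"
        if t >= horizon:
            break
        events.append((t, x, kind))
        x = rng.uniform(0.0, x)              # post-jump state, < x
    return events
```

Between jumps the trajectory is entirely deterministic, so no stochastic differential equation needs to be integrated; the only randomness lies in the jump times and the post-jump states, which is exactly what distinguishes PDPs from diffusion models.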
Contents

Preface

1. Analysis, probability and stochastic processes
   Analysis
   Probability theory
   Stochastic processes
   Markov processes
   Notes and references
2. Piecewise-deterministic Markov processes
   Markov models and supplementary variables
   Ordinary differential equations and vector fields
   Simulation
   Definition of the PDP
   The strong Markov property
   The extended generator of the PDP
   Further Markov properties of the PDP
   Notes and references
3. Distributions and expectations
   The differential formula and transformations of PDPs
   Expectations
   Applications
   Stationary distributions
   Notes and references
4. Control theory
   Feedback control of PDPs
   Naïve dynamic programming
   Relaxed controls
   Control via discrete-time dynamic programming
   Nonsmooth analysis and deterministic optimal control
   The generalized Bellman equation
   Discrete-stage Markov decision models
   Notes and references
5. Control by intervention

Appendix: Jump processes and their martingales

Bibliography