Handbook of Learning and Approximate Dynamic Programming

John Wiley & Sons, 2004 - Computers - 644 pages

This edited volume grew out of the 2002 NSF Workshop on Learning and Approximate Dynamic Programming. It surveys reinforcement learning and approximate dynamic programming, covering topics such as adaptive critic designs, direct neural dynamic programming, hierarchical reinforcement learning, temporal difference methods with linear function approximation, and the linear programming approach to approximate dynamic programming. Later chapters present applications in power systems, building ventilation control, helicopter flight control, and stochastic optimal power flow.


Contents

Foreword
1
Reinforcement Learning and Its Relationship to Supervised Learning
47
Model-Based Adaptive Critic Designs
65
Guidance in the Use of Adaptive Critics for Control
97
Direct Neural Dynamic Programming
125
The Linear Programming Approach to Approximate Dynamic Programming
153
Discussion
173
Reinforcement Learning in Large High-Dimensional State Spaces
179
Hierarchical Decision Making
203
Hierarchical Reinforcement Learning in Theory
209
Hierarchical Reinforcement Learning in Practice
217
Intra-Behavior Learning
223
Improved Temporal Difference Methods with Linear Function Approximation
235
Approximate Dynamic Programming for High-Dimensional
261
Conclusion
279
Hierarchical Approaches to Concurrency, Multiagency
285
Learning and Optimization From a System Theoretic Perspective
311
Robust Reinforcement Learning Using Integral-Quadratic
337
Supervised Actor-Critic Reinforcement Learning
359
Near-Optimal Control Via Reinforcement Learning
407
Multiobjective Control Problems by Reinforcement Learning
433
Adaptive Critic Based Neural Network for Control-Constrained
463
Applications of Approximate Dynamic Programming in Power Systems
479
Robust Reinforcement Learning for Heating, Ventilation
517
Helicopter Flight Control Using Direct Neural Dynamic Programming
535
Toward Dynamic Stochastic Optimal Power Flow
561
Control, Optimization, Security, and Self-healing of Benchmark
599
Copyright


About the authors (2004)

JENNIE SI is Professor of Electrical Engineering at Arizona State University, Tempe, AZ. She is director of the Intelligent Systems Laboratory, which focuses on the analysis and design of learning and adaptive systems. In addition to her own publications, she is an Associate Editor of IEEE Transactions on Neural Networks and a past Associate Editor of IEEE Transactions on Automatic Control and IEEE Transactions on Semiconductor Manufacturing. She was co-chair of the 2002 NSF Workshop on Learning and Approximate Dynamic Programming.

ANDREW G. BARTO is Professor of Computer Science at the University of Massachusetts, Amherst. He is co-director of the Autonomous Learning Laboratory, which carries out interdisciplinary research on machine learning and the modeling of biological learning. He is a core faculty member of the Neuroscience and Behavior Program of the University of Massachusetts and was co-chair of the 2002 NSF Workshop on Learning and Approximate Dynamic Programming. He currently serves as an associate editor of Neural Computation.

WARREN B. POWELL is Professor of Operations Research and Financial Engineering at Princeton University. He is director of CASTLE Laboratory, which focuses on real-time optimization of complex dynamic systems arising in transportation and logistics.

DONALD C. WUNSCH is the Mary K. Finley Missouri Distinguished Professor in the Electrical and Computer Engineering Department at the University of Missouri, Rolla. He heads the Applied Computational Intelligence Laboratory, holds a joint appointment in Computer Science, and is President-Elect of the International Neural Networks Society.
