Analogue Imprecision in MLP Training, Progress in Neural Processing, Vol. 4

World Scientific, Aug 23, 1996 - Computers - 192 pages
Hardware inaccuracy and imprecision are important considerations when implementing neural algorithms. This book presents a study of synaptic weight noise as a typical fault model for analogue VLSI realisations of MLP neural networks and examines its implications for learning and network performance. The aim of the book is to show how including an imprecision model in a learning scheme, as a "fault tolerance hint", can aid understanding of the accuracy and precision requirements of a particular implementation. The study also shows how such a scheme can give rise to significant performance enhancement.
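To make the idea concrete, below is a minimal sketch of training with synaptic weight noise as a fault-tolerance hint: Gaussian noise is added to the stored weights on every forward pass, and the gradient updates are applied to the clean weights. This is an illustrative assumption of the general technique, not the book's exact scheme; the noise level `sigma`, the network size, and the toy XOR task are all made up for the example.

```python
# Sketch: MLP training with synaptic weight noise injected each pass,
# so the learned solution tolerates analogue weight imprecision.
# Illustrative only; sigma, sizes, and the task are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, a classic MLP benchmark.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer: 2 -> 4 -> 1.
W1 = rng.normal(0, 0.5, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1))
b2 = np.zeros(1)

sigma = 0.05  # assumed std of the synaptic weight noise (the fault model)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Draw fresh weight noise on every forward pass.
    W1n = W1 + rng.normal(0, sigma, W1.shape)
    W2n = W2 + rng.normal(0, sigma, W2.shape)

    # Forward pass through the noisy weights.
    h = sigmoid(X @ W1n + b1)
    out = sigmoid(h @ W2n + b2)

    # Backprop of squared error through the noisy weights;
    # the updates are applied to the clean stored weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2n.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Evaluate with noisy weights to check tolerance to imprecision.
W1n = W1 + rng.normal(0, sigma, W1.shape)
W2n = W2 + rng.normal(0, sigma, W2.shape)
pred = sigmoid(sigmoid(X @ W1n + b1) @ W2n + b2)
print(np.round(pred, 2))
```

Because the network only ever sees perturbed weights during training, it is pushed towards solutions that remain correct under the perturbation, which is the performance-enhancement effect the book studies.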

Contents

Chapter 1 Introduction
Chapter 2 Neural Network Performance Metrics
Chapter 3 Noise in Neural Implementations
Chapter 4 Simulation Requirements and Environment
Chapter 5 Fault Tolerance
Chapter 6 Generalisation Ability
Chapter 7 Learning Trajectory and Speed
Chapter 8 Penalty Terms for Fault Tolerance
Chapter 9 Conclusions
Appendix A Fault Tolerance Hints: The General Case
Bibliography
Index
Copyright
