Analogue Imprecision in MLP Training

World Scientific, 1996 - Computers - 178 pages
Hardware inaccuracy and imprecision are important considerations when implementing neural algorithms. This book presents a study of synaptic weight noise as a typical fault model for analogue VLSI realisations of MLP neural networks and examines its implications for learning and network performance. It shows how including an imprecision model in a learning scheme, as a “fault tolerance hint”, can aid understanding of the accuracy and precision requirements of a particular implementation, and how such a scheme can give rise to significant performance enhancement.
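The core technique the book studies, injecting synaptic weight noise during training, can be sketched in a few lines. The Python fragment below is an illustrative sketch only, not the book's own procedure: the additive zero-mean Gaussian noise model, the XOR task, the network size, and all parameter values are assumptions made for the example. Noise-corrupted copies of the weights are used for each forward and backward pass, while updates are applied to the clean stored weights:

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy(w, sigma):
        # Additive zero-mean Gaussian weight noise (assumed fault model).
        return w + sigma * rng.standard_normal(w.shape)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Toy task (assumed for the example): XOR with a 2-4-1 MLP.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.standard_normal((2, 4)); b1 = np.zeros(4)
    W2 = rng.standard_normal((4, 1)); b2 = np.zeros(1)

    sigma, lr = 0.05, 0.5  # noise level and learning rate (assumed values)
    for epoch in range(5000):
        # Corrupt the weights afresh on every pass; this noisy view is
        # the "fault tolerance hint" seen by the learning procedure.
        nW1, nW2 = noisy(W1, sigma), noisy(W2, sigma)
        h = sigmoid(X @ nW1 + b1)
        out = sigmoid(h @ nW2 + b2)

        # Backpropagate squared error through the noisy weights,
        # but apply the updates to the clean stored weights.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ nW2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

A network trained this way sees weight perturbations of the same character it will meet in analogue hardware, which is the kind of fault tolerance and performance effect the book quantifies.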


Contents

Introduction  1
Neural Network Performance Metrics  15
Noise in Neural Implementations  37
Simulation Requirements and Environment  63
Fault Tolerance  81
Generalisation Ability  99
Learning Trajectory and Speed  121
Penalty Terms for Fault Tolerance  137
Conclusions  147
Appendix A: Fault Tolerance Hints: The General Case  159
Bibliography  165
Index  173
Copyright
