Analogue Imprecision in MLP Training
Hardware inaccuracy and imprecision are important considerations when implementing neural algorithms. This book presents a study of synaptic weight noise as a typical fault model for analogue VLSI realisations of MLP neural networks and examines the implications for learning and network performance. Its aim is to show how including an imprecision model in the learning scheme, as a "fault tolerance hint", can aid understanding of the accuracy and precision requirements of a particular implementation. In addition, the study shows how such a scheme can give rise to significant performance enhancement.
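To make the idea concrete, below is a minimal NumPy sketch of training with synaptic weight noise as a "fault tolerance hint": on every forward pass the weights are perturbed by multiplicative Gaussian noise, standing in for analogue imprecision, and the back-propagated weight updates are applied to the nominal weights. The network shape, noise level, toy task, and learning rate are illustrative assumptions, not the book's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task: XOR, a small MLP test problem (assumed here for illustration).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One-hidden-layer MLP: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(0, 0.5, (2, 4))
W2 = rng.normal(0, 0.5, (4, 1))

noise_level = 0.1  # relative (multiplicative) synaptic noise; an assumed value
lr = 0.5           # learning rate; an assumed value

for epoch in range(5000):
    # Perturb every synaptic weight on each pass: w -> w * (1 + noise).
    n1 = W1 * (1.0 + rng.normal(0, noise_level, W1.shape))
    n2 = W2 * (1.0 + rng.normal(0, noise_level, W2.shape))

    # Forward pass through the noisy weights.
    h = sigmoid(X @ n1)
    out = sigmoid(h @ n2)

    # Mean-square-error back-propagation: gradients are taken through the
    # noisy weights but the updates are applied to the nominal weights.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ n2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

# The trained weights should now tolerate comparable perturbation at test time.
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```

Training through the noise pushes the solution away from sharp minima of the error surface, so the final weights sit where small weight perturbations change the output little, which is the source of the fault tolerance and generalisation benefits the book examines.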
Common terms and phrases
accuracy, accuracy and precision, additive noise, analogue hardware, analogue imprecision, analysis, assessment, back-propagation, Chapter, character encoder problem, classification error, computation, considered, de-stabilising, discussed, distribution, effect, enhancement scheme, error function, error surface, eye/not-eye classifier, fault model, fault tolerance performance, final solution, forward pass, generalisation ability, generalisation performance, give, gradient, gradient descent, Graph showing, hardware implementation, hardware model, Henon map, hidden layer, implications, in-the-loop, input, Introduction, issues, learning algorithm, learning trajectory, levels of noise, levels of synaptic, limitations, localisation problem, mathematical analysis, mean square error, method, minimised, multiplicative noise, network performance, networks trained, neural algorithms, neural network, neurons, output, parameters, penalty term, perceptron, performance enhancements, performance metrics, precision, random, robust, series prediction, sigmoid curve, sigmoid function, simulated annealing, simulation experiments, solve, stochastic, Synaptic Noise Level, synaptic weight noise, test problems, trained network, training data, trajectory and speed, weight changes, weight saliency, weight value