Introduction to the Theory of Neural Computation
A comprehensive introduction to the neural network models currently under intensive study for computational applications, with coverage of neural network applications to a range of problems of both theoretical and practical interest.
ONE Introduction
TWO The Hopfield Model
THREE Extensions of the Hopfield Model
FOUR Optimization Problems
FIVE Simple Perceptrons
SIX Multi-Layer Networks
SEVEN Recurrent Networks
EIGHT Unsupervised Hebbian Learning
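As a taste of the book's subject matter, the central model of Chapter TWO can be sketched in a few lines of NumPy. This is an illustrative sketch only, not code from the book: a tiny Hopfield network that stores ±1 patterns via the Hebb rule and recalls a stored pattern from a corrupted cue by asynchronous updates.

```python
import numpy as np

def train(patterns):
    """Hebb rule: W = (1/N) * sum_mu xi_mu xi_mu^T, with zero self-coupling."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, state, steps=100):
    """Asynchronous dynamics: set one random unit to the sign of its local field."""
    s = state.copy()
    rng = np.random.default_rng(0)
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if w[i] @ s >= 0 else -1
    return s

# Two stored +/-1 patterns of N = 6 units.
patterns = np.array([[1, 1, 1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1]])
w = train(patterns)
cue = np.array([1, 1, 1, -1, -1, 1])  # first pattern with its last bit flipped
print(recall(w, cue))                 # converges back to the first pattern
```

The corrupted cue falls into the basin of attraction of the stored pattern, so the dynamics restore the flipped bit; stored patterns themselves are fixed points of the update.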
Common terms and phrases: algorithm, applied, approach, appropriate, architecture, attractor, average, back-propagation, binary, bits, Boltzmann machine, calculate, Chapter, competitive learning, computation, connection strengths, consider, context units, continuous-valued, convergence, cost function, defined, discussed, dynamics, eigenvalues, eigenvector, energy function, equations, equilibrium, error, example, factor, feature mapping, feed-forward, feed-forward networks, FIGURE, finite, Gaussian, given, gives, gradient descent, Hebb rule, Hebbian learning, hidden layer, hidden units, Hopfield network, implementation, independent, input patterns, input space, input units, input vector, Kohonen, learning rule, linear, linearly, magnetic, matrix, mean field, mean field theory, memory, minimize, neural networks, neurons, nonlinear, Oja's, optimization, output layer, output units, parameters, particular, patterns ξ, perceptron, possible, principal component, probability, problem, random, receptive fields, recurrent network, reinforcement, result, Section, sequence, shown in Fig, shows, signal, simple perceptron, solution, solved, spin, stable, statistical mechanics, stochastic, stochastic network, subspace, symmetric, temperature, term, training set, unsupervised learning, values, weight space, weight vector, zero