## Introduction to the Theory of Neural Computation

A comprehensive introduction to the neural network models currently under intensive study for computational applications. It also covers neural network applications in a variety of problems of both theoretical and practical interest.


### Contents

| Chapter | Title | Page |
|---|---|---|
| ONE | Introduction | 1 |
| TWO | The Hopfield Model | 11 |
| THREE | Extensions of the Hopfield Model | 43 |
| FOUR | Optimization Problems | 71 |
| SIX | Multi-Layer Networks | 115 |
| SEVEN | Recurrent Networks | 163 |
| EIGHT | Unsupervised Hebbian Learning | 197 |
| NINE | Unsupervised Competitive Learning | 217 |
| TEN | Formal Statistical Mechanics of Neural Networks | 251 |
| APPENDIX | Statistical Mechanics | 275 |

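The Hopfield model of Chapter TWO lends itself to a short illustration. Below is a minimal sketch (not taken from the book) of Hebb-rule storage and deterministic asynchronous-sweep recall for ±1 units; the function names and parameters are illustrative choices:

```python
import numpy as np

def train_hebb(patterns):
    """Hebb rule: w_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, zero diagonal."""
    n_units = patterns.shape[1]
    w = patterns.T @ patterns / n_units
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, state, max_sweeps=20):
    """Asynchronous sweeps: set each unit to the sign of its local field
    until no unit changes (a stable attractor) or max_sweeps is reached."""
    s = state.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(s)):
            new = 1 if w[i] @ s >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:
            break
    return s

# Store two random +-1 patterns and recover one from a corrupted cue.
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(2, 32))
w = train_hebb(patterns)
noisy = patterns[0].copy()
noisy[:3] *= -1          # flip three bits of the first pattern
restored = recall(w, noisy)
```

With a memory load well below the capacity limit treated in the book, a stored pattern is typically a stable attractor of these dynamics, so the corrupted cue relaxes back onto it.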


### Common terms and phrases

algorithm, applied, approach, appropriate, architecture, attractor, average, back-propagation, binary, bits, Boltzmann machine, calculate, Chapter, competitive learning, computation, connection strengths, consider, context units, continuous-valued, convergence, cost function, defined, discussed, dynamics, eigenvalues, eigenvector, energy function, equations, equilibrium, error, example, factor, feature mapping, feed-forward, feed-forward networks, FIGURE, finite, Gaussian, given, gives, gradient descent, Hebb rule, Hebbian learning, hidden layer, hidden units, Hopfield network, implementation, independent, input patterns, input space, input units, input vector, Kohonen, learning rule, linear, linearly, magnetic, matrix, mean field, mean field theory, memory, minimize, neural networks, neurons, nonlinear, Oja's, optimization, output layer, output units, parameters, particular, patterns, perceptron, possible, principal component, probability, problem, random, receptive fields, recurrent network, reinforcement, result, Section, sequence, shown in Fig, shows, signal, simple perceptron, solution, solved, spin, stable, statistical mechanics, stochastic, stochastic network, subspace, symmetric, temperature, term, training set, unsupervised learning, values, weight space, weight vector, zero