Neural Networks: A Systematic Introduction

Neural networks are a computing paradigm that is attracting increasing attention among computer scientists. In this book, theoretical laws and models previously scattered through the literature are brought together into a general theory of artificial neural nets. Always with a view to biology, and starting with the simplest nets, it is shown how the properties of the models change when more general computing elements and net topologies are introduced. Each chapter contains examples, numerous illustrations, and a bibliography. The book is aimed at readers who seek an overview of the field or who wish to deepen their knowledge. It is suitable as a basis for university courses in neurocomputing.
Contents
The Biological Paradigm  3
Threshold Logic  29
Weighted Networks - The Perceptron  55
Perceptron Learning  77
Unsupervised Learning and Clustering Algorithms  99
One and Two Layered Networks  123
The Backpropagation Algorithm  149
Fast Learning Algorithms  183
Statistics and Neural Networks  227
The Complexity of Learning  263
Fuzzy Logic  287
Associative Networks  309
The Hopfield Model  335
Genetic Algorithms  427
Hardware for Neural Networks  449
Common terms and phrases
approximation, artificial neural networks, Assume, backpropagation, backpropagation algorithm, backpropagation step, binary, Boltzmann, Boltzmann machine, Boolean, cell, cluster, coding, complexity, components, computing units, connections, convergence, corresponds, defined, denote, distribution, edges, elements, energy function, equation, error function, example, feed-forward, finite, function f, fuzzy set, genetic algorithms, given, gradient descent, Hebbian learning, hidden layer, hidden units, Hopfield network, hyperplanes, i-th, implemented, input space, input vector, iteration, kind, Kohonen network, learning algorithm, learning problem, linear associator, linear separation, logical functions, McCulloch-Pitts, method, minimize, minimum, n-dimensional, network function, neurons, node, nonlinear, operators, optimal, output unit, parameters, partial derivative, patterns, perceptron, perceptron learning, phoneme, points, polynomial, possible, probability, processor, produce, pseudoinverse, randomly, recursive, result, selected, shown in Figure, shows, sigmoid, signals, solved, stochastic, strings, systolic array, threshold, training set, transformed, transition, two-dimensional, unsupervised learning, update, variables, visual, weight matrix, weight space, weight vector, zero