An Introduction to Neural Networks
Though mathematical ideas underpin the study of neural networks, the author presents the fundamentals without the full mathematical apparatus. All aspects of the field are tackled, including artificial neurons as models of their real counterparts; the geometry of network action in pattern space; gradient descent methods, including back-propagation; associative memory and Hopfield nets; and self-organization and feature maps. The traditionally difficult topic of adaptive resonance theory is clarified within a hierarchical description of its operation. The book also includes several real-world examples to provide a concrete focus. This should enhance its appeal to those involved in the design, construction and management of networks in commercial environments who wish to improve their understanding of network simulator packages. As a comprehensive and highly accessible introduction to one of the most important topics in cognitive and computer science, this volume should interest a wide range of readers, both students and professionals, in cognitive science, psychology, computer science and electrical engineering.
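The blurb mentions gradient descent as one of the training methods covered. As a minimal illustrative sketch (not taken from the book), gradient descent repeatedly nudges a parameter in the direction that decreases an error function:

```python
# Minimal sketch of gradient descent: step a parameter against the
# gradient of an error function until it settles near a minimum.

def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Follow the negative gradient from w0 for a fixed number of steps."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Example: minimise E(w) = (w - 3)**2, whose gradient is 2 * (w - 3).
w_min = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(round(w_min, 4))  # converges toward 3.0
```

Back-propagation, covered in Chapter Six, is this same idea applied layer by layer to the weights of a multilayer network.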
Chapter One Neural networks: an overview
Chapter Two Real and artificial neurons
Chapter Three TLUs, linear separability and vectors
Chapter Four Training TLUs: the perceptron rule
Chapter Five The delta rule
Chapter Six Multilayer nets and backpropagation
Chapter Seven Associative memories: the Hopfield net
Chapter Eight Self-organization