Advanced Methods in Neural Computing

Continuing where Neural Computing: Theory and Practice left off, this guide explains diverse high-performance paradigms for artificial neural networks (ANNs) that function effectively in real-world situations. Its tutorial approach, standardized notation, undergraduate-level mathematics, and extensive examples present methods for solving practical neural network engineering problems clearly and comprehensibly. Emphasis is placed on paradigms that perform well rather than on those of purely academic interest. Explanations of the paradigms are program-oriented and written in algorithmic form.

Self-contained chapters cover:
- field theory methods, including Nestor's restricted Coulomb energy system;
- probabilistic neural networks, which can increase training speed by orders of magnitude;
- genetic algorithms that mimic biological evolution;
- sparse distributed memory, a powerful associative memory paradigm compatible with VLSI implementation;
- fuzzy logic methods, which are finding widespread application in control systems;
- neural engineering, a set of techniques for designing, training, and applying artificial neural systems to real-world problems.

Additional chapters cover basis-function methods, chaos, and automatic control. Most of the paradigms presented have been used by the author in actual applications; paradigms that are still in the research stage, but offer great potential, are also discussed. Advanced Methods in Neural Computing meets the reference needs of electronics engineers, control systems engineers, programmers, and others in scientific disciplines.
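The training-speed claim for probabilistic neural networks stems from the fact that a PNN (in the Parzen-window formulation associated with Specht) has no iterative training phase: it simply stores the training patterns and classifies by comparing class-conditional kernel density estimates. A minimal sketch, assuming a Gaussian kernel and an illustrative smoothing parameter `sigma` (the class name, method names, and data here are invented for illustration, not taken from the book):

```python
import numpy as np

class PNN:
    """Minimal Parzen-window probabilistic neural network sketch."""

    def __init__(self, sigma=0.5):
        self.sigma = sigma  # kernel width (smoothing parameter, assumed value)

    def fit(self, X, y):
        # "Training" is a single pass that memorizes the patterns -- no
        # gradient descent, which is why PNN training can be orders of
        # magnitude faster than backpropagation.
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y)
        self.classes = np.unique(self.y)
        return self

    def predict(self, x):
        # Sum a Gaussian kernel over each class's stored patterns and
        # return the class with the largest Parzen density estimate.
        d2 = np.sum((self.X - np.asarray(x, dtype=float)) ** 2, axis=1)
        k = np.exp(-d2 / (2.0 * self.sigma ** 2))
        scores = [k[self.y == c].sum() for c in self.classes]
        return self.classes[int(np.argmax(scores))]

# Two well-separated 2-D clusters (toy data)
X = [[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]]
y = [0, 0, 1, 1]
pnn = PNN(sigma=0.3).fit(X, y)
print(pnn.predict([0.05, 0.1]))  # near the class-0 cluster
print(pnn.predict([0.95, 1.0]))  # near the class-1 cluster
```

The trade-off, of course, is that classification cost grows with the size of the training set, since every stored pattern contributes a kernel evaluation at query time.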
Contents
Preface | 1
Field Theory Methods | 14
Probabilistic Neural Networks | 35
Copyright