Mathematical Perspectives on Neural Networks
Paul Smolensky, Michael C. Mozer, David E. Rumelhart
Psychology Press, 1996 - Computers - 862 pages
Recent years have seen an explosion of new mathematical results on learning and processing in neural networks. This body of results rests on a breadth of mathematical background that few specialists possess in full. In a format intermediate between a textbook and a collection of research articles, this book has been assembled to present a sample of these results and to fill in the necessary background, in such areas as computability theory, computational complexity theory, the theory of analog computation, stochastic processes, dynamical systems, control theory, time-series analysis, Bayesian analysis, regularization theory, information theory, computational learning theory, and mathematical statistics.
Mathematical models of neural networks display an amazing richness and diversity. Neural networks can be formally modeled as computational systems, as physical or dynamical systems, and as statistical analyzers. Within each of these three broad perspectives, there are a number of particular approaches. For each of 16 particular mathematical perspectives on neural networks, the contributing authors provide introductions to the background mathematics, and address questions such as:
* Exactly what mathematical systems are used to model neural networks from the given perspective?
* What formal questions about neural networks can then be addressed?
* What are typical results that can be obtained?
* What are the outstanding open problems?
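The blurb's central claim -- that one and the same network can be read as a computational system, a dynamical system, and a statistical analyzer -- can be illustrated on a single sigmoid/threshold unit. A minimal sketch in Python; the function names and parameter values are illustrative, not from the book:

```python
import math

# One simple unit, viewed from the book's three broad perspectives.

# 1. Computational perspective: a linear-threshold unit as a Boolean
#    gate. With these weights and threshold it computes logical AND.
def threshold_unit(x1, x2, w=(1.0, 1.0), theta=1.5):
    return 1 if w[0] * x1 + w[1] * x2 >= theta else 0

# 2. Dynamical perspective: iterating the sigmoid activation map
#    x <- sigmoid(w * x) drives the state toward a fixed point,
#    i.e. an attractor of the network dynamics.
def iterate_to_fixed_point(x0, w=2.0, steps=100):
    x = x0
    for _ in range(steps):
        x = 1.0 / (1.0 + math.exp(-w * x))
    return x

# 3. Statistical perspective: the same sigmoid read as a probability
#    model P(y = 1 | x), i.e. logistic regression with weight w and
#    bias b, whose parameters can be estimated from data.
def predict_prob(x, w=2.0, b=-1.0):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))
```

Each view suggests different formal questions: what functions the unit can compute, where its dynamics settle, and how well its parameters can be estimated -- exactly the kinds of questions the chapters address.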
A distinctive feature of this volume is that for each perspective presented in one of the contributed chapters, the first editor has provided a moderately detailed summary of the formal results and the requisite mathematical concepts. These summaries are presented in four chapters that tie together the 16 contributed chapters: three develop a coherent view of the three general perspectives -- computational, dynamical, and statistical -- and the fourth assembles these three perspectives into a unified overview of the neural networks field.
Contents

* Computational, Dynamical, and Statistical
* Computation by Discrete Neural Nets (Stan Franklin, Institute for Intelligent Systems, Department of Mathematical)
* Circuit Complexity and Feedforward Neural Networks
* Complexity of Learning
* Deterministic and Randomized Local Search
* Dynamical Perspectives on Neural Networks
* Regularization in Neural Nets
* The Basic Theory
* Information Theory and Neural Nets
* Hidden Markov Models and Some Connections
* Probably Approximately Correct Learning
* Parametric Statistical Estimation with Artificial (Richard Golden, School of Human Development, University of Texas at Dallas)