## Mathematical Perspectives on Neural Networks

Recent years have seen an explosion of new mathematical results on learning and processing in neural networks. This body of results rests on a breadth of mathematical background that few specialists possess in full. In a format intermediate between a textbook and a collection of research articles, this book has been assembled to present a sample of these results and to fill in the necessary background, in such areas as computability theory, computational complexity theory, the theory of analog computation, stochastic processes, dynamical systems, control theory, time-series analysis, Bayesian analysis, regularization theory, information theory, computational learning theory, and mathematical statistics.

Mathematical models of neural networks display an amazing richness and diversity. Neural networks can be formally modeled as computational systems, as physical or dynamical systems, and as statistical analyzers. Within each of these three broad perspectives there are a number of particular approaches. For each of 16 particular mathematical perspectives on neural networks, the contributing authors provide introductions to the background mathematics and address questions such as:

* Exactly what mathematical systems are used to model neural networks from the given perspective?
* What formal questions about neural networks can then be addressed?
* What are typical results that can be obtained?
* What are the outstanding open problems?

A distinctive feature of this volume is that for each perspective presented in one of the contributed chapters, the first editor has provided a moderately detailed summary of the formal results and the requisite mathematical concepts. These summaries are presented in four chapters that tie together the 16 contributed chapters: three develop a coherent view of the three general perspectives (computational, dynamical, and statistical), and the fourth assembles these three perspectives into a unified overview of the neural networks field.


### Contents

Computational, Dynamical, and Statistical | 1 |

Stan Franklin, Institute for Intelligent Systems, Department of Mathematical | 10 |

Computational Perspectives | 17 |

Computation by Discrete Neural Nets | 41 |

Circuit Complexity and Feedforward Neural Networks | 85 |

Complexity of Learning | 113 |

Deterministic and Randomized Local Search | 143 |

Dynamical Perspectives on Neural Networks | 245 |

Dynamical Systems | 271 |

Statistical Analysis of Neural Networks | 325 |

Neural Networks in Control Systems | 347 |

Time Series Analysis and Prediction | 395 |

Statistical Perspectives on Neural Networks | 453 |

Regularization in Neural Nets | 497 |

The Basic Theory | 533 |

Information Theory and Neural Nets | 567 |

Hidden Markov Models and Some Connections | 603 |

Probably Approximately Correct Learning | 651 |

Richard Golden, School of Human Development, University of Texas at Dallas | 688 |

Parametric Statistical Estimation with Artificial | 719 |

Inductive Principles of Statistics and Learning Theory | 777 |

Author Index | 843 |

### Other editions - View all

Mathematical Perspectives on Neural Networks, Paul Smolensky, Michael C. Mozer, David E. Rumelhart (Limited preview, 2013)

Mathematical Perspectives on Neural Networks, Paul Smolensky, Michael C. Mozer (No preview available, 2015)
