An Introduction to Computational Learning Theory
Michael J. Kearns and Umesh V. Vazirani

MIT Press, 1994 - Computers - 207 pages

Emphasizing issues of computational efficiency, Michael Kearns and Umesh Vazirani introduce a number of central topics in computational learning theory for researchers and students in artificial intelligence, neural networks, theoretical computer science, and statistics.

Computational learning theory is a new and rapidly expanding area of research that examines formal models of induction with the goals of discovering the common methods underlying efficient learning algorithms and identifying the computational impediments to learning.

Each topic in the book has been chosen to elucidate a general principle, which is explored in a precise formal setting. Intuition has been emphasized in the presentation to make the material accessible to the nontheoretician while still providing precise arguments for the specialist. This balance is the result of new proofs of established theorems and new presentations of the standard proofs.

The topics covered include the motivation, definitions, and fundamental results, both positive and negative, for the widely studied L. G. Valiant model of Probably Approximately Correct Learning; Occam's Razor, which formalizes a relationship between learning and data compression; the Vapnik-Chervonenkis dimension; the equivalence of weak and strong learning; efficient learning in the presence of noise by the method of statistical queries; relationships between learning and cryptography, and the resulting computational limitations on efficient learning; reducibility between learning problems; and algorithms for learning finite automata from active experimentation.
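The central model named above can be stated informally as follows (this is the standard PAC criterion, paraphrased here rather than quoted from the book): a concept class C is PAC learnable if there is an algorithm that, for every target concept c in C, every distribution D over the example domain, and every epsilon, delta in (0, 1), when given a polynomial number of examples drawn independently from D and labeled by c, outputs with probability at least 1 - delta a hypothesis h whose error satisfies

    \Pr_{x \sim D}[\, h(x) \neq c(x) \,] \le \epsilon .

Efficient PAC learning further requires the running time to be polynomial in 1/epsilon, 1/delta, and the natural size parameters of the problem.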

  


Contents

Occam's Razor    31
The Vapnik-Chervonenkis Dimension    49
Weak and Strong Learning    73
Learning in the Presence of Noise    103
Inherent Unpredictability    123
Reducibility in PAC Learning    143
Learning Finite Automata by Experimentation    155
Some Tools for Probabilistic Analysis    189
Bibliography    193
Copyright

About the author (1994)

Michael J. Kearns is Professor of Computer and Information Science at the University of Pennsylvania. Umesh V. Vazirani is Professor of Electrical Engineering and Computer Sciences at the University of California, Berkeley.
