The Computational Complexity of Machine Learning
The Computational Complexity of Machine Learning is a mathematical study of the possibilities for efficient learning by computers. It works within recently introduced models for machine inference that are based on the theory of computational complexity and that place an explicit emphasis on efficient and general algorithms for learning. Theorems are presented that help elucidate the boundary of what is efficiently learnable from examples. These results take the form of both algorithms with proofs of their performance and hardness results demonstrating the intractability of learning in certain natural settings. In addition, the book contains lower bounds on the resources required for learning, an extensive study of learning in the presence of errors in the sample data, and several theorems demonstrating reducibilities between learning problems.

Michael J. Kearns is a Postdoctoral Associate in the Laboratory for Computer Science at MIT.

Contents: Definitions, Notations, and Motivation. Overview of Recent Research in Computational Learning Theory. Useful Tools for Distribution-Free Learning. Learning in the Presence of Errors. Lower Bounds on Sample Complexity. Cryptographic Limitations on Polynomial-Time Learning. Distribution-Specific Learning in Polynomial Time. Equivalence of Weak Learning and Group Learning.
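The distribution-free (PAC) setting the book works in can be illustrated by Valiant's classic elimination algorithm for learning a monotone conjunction (a monomial) from positive examples; monomials, positive examples, and the distribution-free model all figure in the book's index. A minimal sketch, with a hypothetical target conjunction chosen only for illustration:

```python
import random

def learn_monotone_conjunction(examples, n):
    """Elimination learner: start with the conjunction of all n variables
    and delete any variable that is 0 in some positive example."""
    hypothesis = set(range(n))  # indices of variables kept in the conjunction
    for x, label in examples:
        if label:  # positive example: every kept variable must be 1 here
            hypothesis -= {i for i in range(n) if x[i] == 0}
    return hypothesis

# Hypothetical target concept: x0 AND x2 over n = 4 Boolean variables.
def target(x):
    return x[0] == 1 and x[2] == 1

random.seed(0)
n = 4
sample = []
for _ in range(50):
    x = tuple(random.randint(0, 1) for _ in range(n))
    sample.append((x, target(x)))

h = learn_monotone_conjunction(sample, n)
print(sorted(h))  # with high probability over the sample, converges to {0, 2}
```

The hypothesis never eliminates a target variable (positive examples always set it to 1), and each irrelevant variable survives only as long as no positive example sets it to 0, which is the source of the "probably approximately correct" guarantee in the distribution-free model.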