## Computational Learning Theory

Computational learning theory is one of the first attempts to construct a mathematical theory of a cognitive process, and it has been a field of much interest and rapid growth in recent years. This text provides a framework for studying a variety of algorithmic learning processes, such as those currently used to train artificial neural networks. The authors concentrate on an approximate model of learning and gradually develop the idea of efficiency considerations, before finally considering applications of the theory to artificial neural networks. An abundance of exercises and an extensive list of references round out the text. Drawing on logic, probability, and complexity theory, this volume provides a comprehensive review of the topic and forms a solid introduction to the theory of computational learning, suitable for a broad spectrum of graduate students from theoretical computer science to mathematics.
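The "approximate model" the blurb refers to is Valiant's probably approximately correct (pac) framework. As a flavour of the material, here is a minimal sketch (not taken from the book; names and the toy data are illustrative) of the classical elimination algorithm for learning a monomial, i.e. a conjunction of boolean literals, from positive examples:

```python
def learn_monomial(n, positives):
    """Start with all 2n literals; delete any literal falsified by a positive example.

    A literal is a pair (i, v): (i, 1) means "x_i must be 1",
    (i, 0) means "x_i must be 0".
    """
    hypothesis = {(i, v) for i in range(n) for v in (0, 1)}
    for x in positives:
        # Keep only the literals consistent with this positive example.
        hypothesis = {(i, v) for (i, v) in hypothesis if x[i] == v}
    return hypothesis

def predict(hypothesis, x):
    """The hypothesis accepts x iff every surviving literal is satisfied."""
    return all(x[i] == v for (i, v) in hypothesis)

# Toy target concept: x_0 AND (NOT x_2) over n = 3 boolean variables.
positives = [(1, 0, 0), (1, 1, 0)]
h = learn_monomial(3, positives)
# h is now {(0, 1), (2, 0)}, i.e. x_0 = 1 and x_2 = 0.
```

The algorithm's output is always consistent with the training sample, and a standard sample-complexity argument shows it is an efficient pac learning algorithm for the monomial concept space.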


### Contents

| Section | Page |
| --- | --- |
| II | 1 |
| III | 2 |
| IV | 3 |
| V | 5 |
| VI | 8 |
| VII | 9 |
| VIII | 11 |
| IX | 13 |
| XXXII | 61 |
| XXXIII | 64 |
| XXXIV | 67 |
| XXXV | 69 |
| XXXVI | 73 |
| XXXVII | 74 |
| XXXVIII | 83 |
| XXXIX | 84 |

### Other editions - View all

*Computational Learning Theory: An Introduction*, Martin Anthony and Norman Biggs (1992). No preview available.

### Common terms and phrases

algorithm for H, Blumer, boolean functions, boolean spaces, Chapter, characteristic functions, computation nodes, Computational Learning Theory, concept space, confidence and accuracy, consistency problem, consistent learning algorithm, contains, decision list, defined, delete, denote, described, disjunction, disjunctive normal form, DL(K, doubly-graded, efficient pac, efficient with respect, epac, example space, feedforward, finite VC dimension, follows, formula, graded hypothesis space, graded space, graph, greedy algorithm, Haussler, hypothesis h, hypothesis space, Kearns, labelled examples, Let H, literals, machine, Machine Learning, monomial, negative examples, notation, NP-complete, NP-hard, Occam algorithm, output hypothesis, pac learnable, pac learning algorithm, Pitt, positive examples, positive integer, potentially learnable, probability distribution, probably approximately correct, prove, randomised algorithm, real numbers, real perceptron, respect to example, result, running time polynomial, running time, RL(m,n, sample complexity, sample of length, Section, shattered, space H, subcover, subset, target concept, Theorem, training sample, upper bound, Valiant, VCdim, vector, Warmuth, weight-vector