## Machine Learning, ECML- ...: Proceedings. Springer, 2004 - Induction (Logic)

### From inside the book

Results 1-3 of 87

Page 266

5 Double **Induction**

The idea of Double **Induction** (8) is to **induce** new rules based on the examples that are covered by ... if these newly **induced** rules have rule conflicts between them. Recursive **Induction** does yet another **induction**, using ...
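The snippet above only sketches the Double Induction idea, so the following is a toy illustration, not the method from the proceedings: all function and variable names are hypothetical, and only the conflict-detection step it mentions (newly induced rules that disagree on examples they jointly cover) is shown.

```python
# Toy sketch of the conflict step in the Double Induction idea:
# find examples covered by two rules that predict different classes,
# which would then be the input to a second round of induction.

def covers(rule, example):
    """A rule is a dict with conditions (attribute -> value) and a class."""
    return all(example.get(a) == v for a, v in rule["conds"].items())

def conflicting_examples(rules, examples):
    """Examples on which covering rules predict more than one class."""
    out = []
    for ex in examples:
        predictions = {r["cls"] for r in rules if covers(r, ex)}
        if len(predictions) > 1:
            out.append(ex)
    return out

rules = [
    {"conds": {"color": "red"}, "cls": "pos"},
    {"conds": {"size": "big"}, "cls": "neg"},
]
examples = [
    {"color": "red", "size": "big"},    # covered by both rules -> conflict
    {"color": "red", "size": "small"},  # covered by the first rule only
]
print(conflicting_examples(rules, examples))
# -> [{'color': 'red', 'size': 'big'}]
```

In the scheme the snippet describes, these conflicting examples would be passed to another induction run, recursively if the new rules conflict again.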

Page 413

An important criterion, when **inducing** the generalized trajectory, is comprehensibility. For example, if the generalized trajectory is **induced** through tree learning, the class values should be state variables that are intuitive to human thinking ...

Page 449

A possible explanation for this is that CIPER **induces** a single equation/model over the entire data set, as opposed to a number of partial models **induced** for data subsets in model trees. Finally, to our surprise, the overall accuracy of CIPER ...
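The contrast the snippet draws, one equation fit over the whole data set versus partial models fit on data subsets (as in model trees), can be illustrated with a minimal NumPy sketch. This does not reproduce CIPER's actual equation-discovery search; the data and fits are purely illustrative.

```python
# Hedged sketch of the distinction in the snippet: a single global
# polynomial model over all data vs. piecewise linear models on subsets.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = x ** 2  # target generated by one global quadratic relationship

# One equation induced over the entire data set (degree-2 polynomial fit):
global_coeffs = np.polyfit(x, y, 2)

# Partial models induced for data subsets, as a model tree would do
# after splitting the input space (split point chosen by hand here):
left_coeffs = np.polyfit(x[:3], y[:3], 1)   # linear fit on x < 2.5
right_coeffs = np.polyfit(x[3:], y[3:], 1)  # linear fit on x >= 2.5

print(np.round(global_coeffs, 3))
# -> [1. 0. 0.], i.e. the global fit recovers y = x^2 exactly
```

When the data really follow one global law, the single equation recovers it exactly, while the piecewise linear models can only approximate it segment by segment; the snippet's observation about accuracy concerns this trade-off.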


### Contents

Random Matrices in Data Analysis | 1

Data Privacy | 8

David J Hand | 18

Copyright

36 other sections not shown

