Multiple Classifier Systems: Second International Workshop, MCS 2001 Cambridge, UK, July 2-4, 2001 Proceedings
Josef Kittler, Fabio Roli
Springer, Aug 9, 2001 - Machine learning - 456 pages
This book constitutes the refereed proceedings of the Second International Workshop on Multiple Classifier Systems, MCS 2001, held in Cambridge, UK, in July 2001. The 44 revised papers presented were carefully reviewed and selected for inclusion in the book. The book offers topical sections on bagging and boosting, MCS design methodology, ensemble classifiers, feature spaces for MCS, MCS in remote sensing, one-class MCS and clustering, and combination strategies.
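The first topic in the list, bagging with majority voting, can be illustrated with a minimal sketch: train several weak base classifiers on bootstrap resamples of the training data and combine their predictions by majority vote. The toy data, the threshold-stump learner, and all function names below are illustrative assumptions, not taken from any paper in this volume.

```python
import random

def bootstrap_sample(data, rng):
    """Draw a resample of the same size, with replacement."""
    return [rng.choice(data) for _ in data]

def train_stump(sample):
    """Fit a 1-feature threshold classifier (an illustrative weak learner)."""
    # Choose the threshold that minimizes training error on the resample.
    best_t, best_err = None, float("inf")
    for t in sorted(x for x, _ in sample):
        err = sum((x >= t) != y for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_predict(thresholds, x):
    """Combine the stump ensemble by majority vote."""
    votes = sum(x >= t for t in thresholds)
    return votes * 2 > len(thresholds)

rng = random.Random(0)
# Toy 1-D data: the label is True when the feature exceeds 5.
data = [(x, x > 5) for x in range(11)]
ensemble = [train_stump(bootstrap_sample(data, rng)) for _ in range(15)]
print(bagged_predict(ensemble, 8))  # most stumps should vote True here
print(bagged_predict(ensemble, 2))  # and False here
```

Because each stump sees a different bootstrap resample, the individual thresholds vary, but the majority vote is more stable than any single stump, which is the basic motivation behind bagging.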
Bagging and Boosting
A Generalized Class of Boosting Algorithms Based on Recursive Decoding
Learning Classification RBF Networks by Boosting