Estimation by the Nearest Neighbor Rule

Systems Theory Laboratory, Stanford Electronics Laboratories, Stanford University, 1966 - Estimation theory - 27 pages
The nearest-neighbor estimate of the random parameter associated with a given observation is defined to be the parameter associated with the nearest observation in some training set. This paper is concerned with the infinite parameter problem (estimation) as opposed to the finite parameter problem (classification). Because of the unboundedness of the loss function in the general estimation problem, certain new considerations are required. For a wide range of probability distributions, the large-sample risk of the nearest-neighbor estimate is shown here to be less than twice the Bayes risk for metric loss functions and equal to twice the Bayes risk for squared-error loss functions. In this sense, at least half the information in the training set is contained in the nearest neighbor. (Author).
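The rule described above is simple to state concretely: given a training set of (observation, parameter) pairs, the estimate for a new observation is the parameter attached to the closest training observation. A minimal sketch follows, assuming scalar observations and Euclidean (absolute-value) distance; the function name `nn_estimate` and the sample data are illustrative, not taken from the report.

```python
def nn_estimate(x, training_set):
    """Nearest-neighbor estimate: return the parameter paired with the
    training observation closest to x (absolute-distance metric)."""
    nearest_obs, nearest_param = min(
        training_set, key=lambda pair: abs(pair[0] - x)
    )
    return nearest_param

# Hypothetical training pairs (observation, parameter).
train = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.2), (3.0, 3.1)]
print(nn_estimate(1.4, train))  # nearest observation is 1.0 -> 0.9
```

For vector-valued observations the same rule applies with any metric on the observation space; only the distance function in the `key` changes.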

Contents

Section 1 (p. 3)
Section 2 (p. 6)
Section 3 (p. 11)
Section 4 (p. 13)
Section 5 (p. 15)
