UWEE Tech Report Series

Minimum Expected Risk Estimation for Near-neighbor Classification


UWEETR-2006-0006

Author(s):
Maya Gupta, Santosh Srivastava, Luca Cazzanti

Keywords:
near-neighbor learning, local learning, Bayesian estimation, LIME

Abstract

We consider the problems of class probability estimation and classification when using near-neighbor classifiers, such as k-nearest neighbors (kNN). This paper investigates minimum expected risk estimates for neighborhood learning methods. We give analytic solutions for the minimum expected risk estimate for weighted kNN classifiers under different prior information and for a broad class of risk functions. Theory and simulations show how significantly these estimates differ from the standard maximum likelihood weighted kNN estimates. Comparisons are made with uniform weights, symmetric weights (tricube kernel), and asymmetric weights (LIME kernel). We also show that if the uncertainty in the class probability is modeled by a random variable and the expected misclassification cost is minimized, the result is equivalent to using a classifier with a minimum expected risk estimate. For symmetric costs and uniform priors, minimum expected risk estimates have no advantage over the standard maximum likelihood estimates; for asymmetric costs, simulations show that the differences can be striking.
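To make the contrast concrete, here is a minimal Python sketch, not the paper's derivation: it compares the maximum likelihood class-probability estimate from weighted near-neighbor votes with a Bayesian posterior-mean estimate, then applies a minimum-expected-cost decision rule. The function names (weighted_knn_estimates, classify) are hypothetical, and treating weighted votes as fractional counts in a Beta posterior is an illustrative assumption.

```python
import numpy as np

def weighted_knn_estimates(weights, labels, alpha=1.0, beta=1.0):
    """Class-one probability estimates from weighted near-neighbor votes.

    weights: nonnegative neighbor weights (e.g., from a tricube kernel)
    labels:  binary neighbor labels in {0, 1}

    Returns the maximum likelihood (ML) estimate and a minimum-expected-
    risk-style estimate: the posterior mean of a Beta(alpha, beta) prior
    updated with the votes. The posterior-mean form is exact for
    unweighted counts; using weighted votes as fractional counts is an
    assumption made for illustration only.
    """
    w = np.asarray(weights, float)
    y = np.asarray(labels, float)
    s = np.dot(w, y)                         # weighted votes for class 1
    n = w.sum()                              # total weight
    ml = s / n                               # maximum likelihood estimate
    mer = (s + alpha) / (n + alpha + beta)   # posterior mean (uniform prior if alpha = beta = 1)
    return ml, mer

def classify(p1, cost_fp=1.0, cost_fn=1.0):
    """Minimum-expected-cost decision: predict class 1 iff the expected
    cost of predicting 1 is lower than that of predicting 0."""
    return int(p1 * cost_fn > (1.0 - p1) * cost_fp)

# Example: three neighbors, all voting for class 1, with uniform weights.
ml, mer = weighted_knn_estimates([1, 1, 1], [1, 1, 1])
print(ml, mer)   # 1.0 vs 0.8: the posterior mean shrinks toward the prior

# With symmetric costs both estimates yield the same decision; with an
# asymmetric false-positive cost the decisions diverge (1 vs 0 here).
print(classify(ml, cost_fp=4.0), classify(mer, cost_fp=4.0))
```

In this toy run the two estimates agree under symmetric costs but disagree once false positives cost four times as much as false negatives, which mirrors the abstract's point that the estimates matter most when costs are asymmetric.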
