The k-NN Classifier Algorithm

Given a training set X_train with labels y_train, and given a new instance x_test to be classified:

The k-NN classifier simply memorises the entire training set. Then, to classify a new instance, it performs three steps (sketched in code after the list below):

  1. First, it finds the k most similar instances to the new instance in the training set.
  2. Then it gets the labels of those training instances.
  3. Finally, it predicts the label of the new instance as a function of those nearby training labels, typically by a simple majority vote.
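To make these three steps concrete, here is a minimal from-scratch sketch. It assumes a NumPy feature matrix, Euclidean distance as the similarity measure, and a plain majority vote; the function name knn_predict is purely illustrative.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_test, k=3):
    """Classify a single instance x_test with a basic k-NN majority vote."""
    # Step 1: find the k most similar (closest) training instances.
    distances = np.linalg.norm(X_train - x_test, axis=1)
    nearest_idx = np.argsort(distances)[:k]
    # Step 2: get the labels of those training instances.
    nearest_labels = y_train[nearest_idx]
    # Step 3: predict by a simple majority vote over the nearby labels.
    return Counter(nearest_labels).most_common(1)[0][0]
```

In the rest of this post we use scikit-learn's KNeighborsClassifier, which implements the same idea along with efficient neighbour search and other options.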

Here’s how a k-Nearest Neighbour classifier that uses only one nearest neighbour (that is, k = 1) makes these predictions on our simple binary synthetic dataset, where the points in class zero are shown as yellow dots and the points in class one as black dots.

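The original code isn’t reproduced here, but a sketch along the following lines shows the idea. It assumes a synthetic two-class dataset from make_classification and scikit-learn’s KNeighborsClassifier; with a different dataset or split, the exact scores will differ from the numbers quoted below.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Simple binary synthetic dataset with two features and some label noise.
X, y = make_classification(n_samples=100, n_features=2, n_redundant=0,
                           n_clusters_per_class=1, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

fig, axes = plt.subplots(1, 3, figsize=(15, 4))
for ax, k in zip(axes, [1, 3, 11]):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)

    # Colour every point of the feature space by the class k-NN would predict there.
    xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 200),
                         np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 200))
    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
    ax.contourf(xx, yy, Z, alpha=0.3)

    # Overlay the training points and report train/test accuracy in the title.
    ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, edgecolor='k')
    ax.set_title('k = {}: train = {:.2f}, test = {:.2f}'.format(
        k, clf.score(X_train, y_train), clf.score(X_test, y_test)))
plt.show()
```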

If you run this code and compare the resulting training and test scores for k = 1, 3, and 11, which are shown in the title of each plot, you can see the effect of model complexity on a model's ability to generalise. The plots also show how the entire feature space has been broken up into decision regions, according to the predictions the k-NN classifier would make at each point.


You can see that the one-nearest-neighbour classifier is overfitting the training data in this case: it tries to get a correct prediction for every single training point while ignoring the general trend between the two classes.

In the k = 1 case, the training score is a perfect 100%, but the test score is only 64%. As k increases to 3, the training score drops to 80% but the test score rises slightly to 72%, indicating that the model generalises better to new data. When k = 11, the training score drops a bit further, to 73%, but the test score improves again, to 80%, indicating that this simpler model is much more effective at ignoring minor variations in the training data.

The best k-NN model sits at the sweet spot where the test-set score is at its maximum.
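One simple way to look for that sweet spot, assuming the same synthetic dataset and split as in the sketch above, is to score a range of k values on the held-out test set and pick the best one:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Same synthetic two-class dataset and split as in the earlier sketch.
X, y = make_classification(n_samples=100, n_features=2, n_redundant=0,
                           n_clusters_per_class=1, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Score a range of k values on the held-out test set and keep the maximum.
test_scores = {}
for k in range(1, 21):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    test_scores[k] = clf.score(X_test, y_test)

best_k = max(test_scores, key=test_scores.get)
print('Best k = {}, test accuracy = {:.2f}'.format(best_k, test_scores[best_k]))
```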

k-Nearest Neighbour classifiers can be applied to any number of classes, not just two. For the other parameters of k-NN and their effects on the resulting decision boundaries, please see the scikit-learn k-NN documentation.
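The classifier itself needs no changes for the multi-class case. As a quick illustration, here is a sketch using scikit-learn's bundled iris dataset, which has three classes (the choice of k = 5 is arbitrary here):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# The iris dataset has three classes; k-NN handles it exactly as in the binary case.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print('Test accuracy on three classes: {:.2f}'.format(clf.score(X_test, y_test)))
```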

The nearest-neighbours approach isn’t useful just for classification; you can use it for regression too. In our next blog, we will look at k-NN regression modelling. Happy classifying 🙂