K-Nearest Neighbors: Part 1 Introduction

Walid Hadri
7 min read · Apr 1, 2021


This is Part 1 of a series of lectures dedicated to the K-Nearest Neighbors algorithm. The goal is to introduce how KNN works; some of the points mentioned here will be covered in detail in the next lectures.

K-Nearest Neighbors is a non-parametric, non-probabilistic, discriminative, lazy supervised learning algorithm used for classification and regression.

Non-parametric: because KNN does not make any assumptions about the distribution of the data.
Non-probabilistic: it does not produce a probability of membership for a data point; instead, KNN makes hard assignments (for classification, membership in a class; for regression, membership in an interval).
Discriminative: KNN is usually considered discriminative because we can draw discriminant boundaries for classification. Some people consider it generative since we store all the data (in the generic case) and could therefore generate new data. We can say that KNN falls between the two, but the discriminative side is the dominant one.
Lazy learner: because it does not have a training phase. All the data (in the generic case) is used at the test and prediction phase (it is also called an instance-based algorithm).

Among the applications of KNN: recommendation systems (used by YouTube, Facebook and Netflix), fraud detection (through outlier detection) and document classification based on semantics.

The lecture goes as follows:

1. Similarity
2. K the number of neighbors and Decision strategy
3. KNN algorithm formulation
4. Advantages and Drawbacks
5. Decision Boundaries and picking K
6. Improving KNN

1. Similarity

The KNN algorithm makes the assumption that similar inputs have similar outputs. So basically the similarity between two instances is something that we can measure, and instances with high similarity are supposed to have the same label (a close one for regression). We then need a function to calculate the similarity. Among the most used functions are distance functions, but you can define your own function that lets you measure the similarity between your instances more precisely. For instance, suppose you have a set of cars {A, B, C} with given characteristics, and according to you A is more similar to B than it is to C; you can then define a function h so that, based on the characteristics of the cars, you have h(A, B) < h(A, C). Let's now see some of the most used distance functions:

Minkowski Distance:
The Minkowski distance is a family of distances between two vectors, parameterized by p > 0. For two vectors x and y it is defined as d(x, y) = ( Σ |x_i − y_i|^p )^(1/p), and its special cases cover several familiar distances (a small code sketch follows below):


p = 1, Manhattan Distance

p = 2, Euclidean Distance

p = ∞, Chebyshev Distance
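Here is a minimal sketch of these distances with NumPy (assumed as a dependency; the two vectors are arbitrary examples):

```python
import numpy as np

def minkowski(x, y, p):
    """Minkowski distance between two vectors for a given p > 0."""
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 3.0])

print(minkowski(x, y, p=1))   # Manhattan distance: 5.0
print(minkowski(x, y, p=2))   # Euclidean distance: ~3.61
print(np.max(np.abs(x - y)))  # Chebyshev distance (the limit p -> infinity): 3.0
```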

Cosine Distance and cosine similarity:
The cosine metric measures the angle between two vectors. It is used when the magnitude of the vectors does not matter, only their orientation. Cosine similarity is useful because even if two vectors are far apart in terms of distance, they may still point in nearly the same direction. The smaller the angle, the higher the cosine similarity. It is often used to measure document similarity in text analysis.

Let θ be the angle between the two vectors, then
Cosine similarity = cos(θ)
Cosine distance = 1 − cos(θ)
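A quick sketch of both quantities in NumPy (the example vectors are arbitrary; they point in the same direction but have different magnitudes):

```python
import numpy as np

def cosine_similarity(x, y):
    """cos(theta) between two vectors: 1 means same orientation, 0 means orthogonal."""
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

def cosine_distance(x, y):
    return 1.0 - cosine_similarity(x, y)

x = np.array([1.0, 1.0, 0.0])
y = np.array([2.0, 2.0, 0.0])   # same direction, different magnitude

print(cosine_similarity(x, y))  # ~1.0: identical orientation despite different lengths
print(cosine_distance(x, y))    # ~0.0
```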

Mahalanobis Distance:
The Mahalanobis distance is a measure of the distance between a point P and a distribution D. It is a multi-dimensional generalization of the idea of measuring how many standard deviations away P is from the mean of D.

The Mahalanobis distance measures distance relative to the centroid, a base or central point which can be thought of as an overall mean for multivariate data. The centroid is the point in multivariate space where the means of all variables intersect. The larger the Mahalanobis distance, the further the data point is from the centroid. Its most common use is to find multivariate outliers, which indicate unusual combinations of two or more variables. For example, it's fairly common to find a 6′ tall woman weighing 185 lbs, but it's rare to find a 4′ tall woman who weighs that much.
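Here is a small sketch with NumPy and SciPy (both assumed available); the data is random and only serves to provide a centroid and a covariance matrix:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(0)
D = rng.normal(size=(200, 3))                 # a sample from a 3-dimensional distribution

mu = D.mean(axis=0)                           # centroid (mean of every variable)
VI = np.linalg.inv(np.cov(D, rowvar=False))   # inverse covariance matrix

point = np.array([2.5, -1.0, 0.5])
print(mahalanobis(point, mu, VI))             # how many "standard deviations" the point is from the centroid
```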

Hamming Distance:
The Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different.
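A minimal sketch (the two strings are just an example):

```python
def hamming_distance(s1, s2):
    """Number of positions at which two equal-length strings differ."""
    if len(s1) != len(s2):
        raise ValueError("Hamming distance requires strings of equal length")
    return sum(c1 != c2 for c1, c2 in zip(s1, s2))

print(hamming_distance("karolin", "kathrin"))  # 3
```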

There are many distances out there, and you can define your own if you think it suits your problem better. The most important thing to keep in mind is that this function needs to be as informative as possible about the similarity between your instances/vectors.
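Going back to the cars example, a hand-crafted similarity function could look like the toy sketch below (the features and weights are made up purely for the illustration):

```python
# Toy hand-crafted similarity: smaller value = more similar.
# Features and weights are invented for the example.

def h(car1, car2):
    body_mismatch = 0.0 if car1["body"] == car2["body"] else 1.0
    hp_gap = abs(car1["horsepower"] - car2["horsepower"]) / 100.0
    return body_mismatch + 0.5 * hp_gap

A = {"body": "sedan", "horsepower": 150}
B = {"body": "sedan", "horsepower": 170}
C = {"body": "truck", "horsepower": 160}

print(h(A, B))  # small: same body type, similar horsepower
print(h(A, C))  # larger: the body-type mismatch dominates, so h(A, B) < h(A, C)
```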

2. K the number of neighbors and Decision strategy

Once we define our similarity function, the question is how many points we should consider when evaluating a new point. This is the hyperparameter K. The K nearest neighbors are the K closest points in terms of the distance function we defined. Suppose we set K = 1; then we only consider the closest point. If we set K = number of training examples, then we consider all the points. We will see the effect of K and how we can choose an optimal value.

Suppose we now have the K nearest points with their labels. Here there are many strategies that we can consider, depending on the problem and the priorities you define.

For example, for a classification problem, I can assign to the new point the majority class (the mode) among the K neighbors. I can also weight each neighbor's vote by its distance to the new point. I can even bias the vote toward a particular class when it appears: for instance, if I am dealing with fraud detection and one of the neighbors belongs to the positive class, I may want to assign the positive class to the new point.
For a regression problem, I can use a simple mean of the neighbors' values or a distance-weighted average…
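Here is a sketch of three such decision strategies (the labels, values and distances are invented examples):

```python
import numpy as np
from collections import Counter

def majority_vote(neighbor_labels):
    """Classification: assign the most common label among the K neighbors."""
    return Counter(neighbor_labels).most_common(1)[0][0]

def weighted_vote(neighbor_labels, neighbor_distances):
    """Classification: weight each neighbor's vote by the inverse of its distance."""
    scores = {}
    for label, dist in zip(neighbor_labels, neighbor_distances):
        scores[label] = scores.get(label, 0.0) + 1.0 / (dist + 1e-8)
    return max(scores, key=scores.get)

def weighted_mean(neighbor_values, neighbor_distances):
    """Regression: distance-weighted average of the neighbors' values."""
    w = 1.0 / (np.asarray(neighbor_distances) + 1e-8)
    return float(np.sum(w * np.asarray(neighbor_values)) / np.sum(w))

print(majority_vote(["fraud", "ok", "fraud"]))                # 'fraud'
print(weighted_vote(["fraud", "ok", "ok"], [0.1, 2.0, 3.0]))  # 'fraud': the closest neighbor dominates
print(weighted_mean([10.0, 20.0], [1.0, 1.0]))                # 15.0
```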

3. KNN algorithm formulation

So the steps are as follows (a minimal from-scratch sketch comes after the list):

  1. Find the K-closest points among the training points to the new point.
  2. Based on the decision strategy and the K-closest points, assign a label to the new point.
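Putting the two steps together, here is one way to sketch a KNN classifier from scratch; Euclidean distance and a majority vote are just one choice of similarity function and decision strategy, and the tiny dataset is made up:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k):
    """Classify x_new by majority vote among its k nearest training points."""
    # Step 1: find the K closest training points (Euclidean distance here).
    distances = np.linalg.norm(X_train - x_new, axis=1)
    nearest = np.argsort(distances)[:k]
    # Step 2: apply the decision strategy (majority vote) to their labels.
    return Counter(y_train[nearest]).most_common(1)[0][0]

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array(["blue", "blue", "red", "red"])

print(knn_predict(X_train, y_train, np.array([0.2, 0.1]), k=3))  # 'blue'
```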

4. Advantages and Drawbacks

Some pros of KNN are:

. Simple algorithm and easy to interpret, with few hyperparameters: mainly K and the distance function.
. No assumptions about the data distribution, which makes it useful for nonlinear data.
. No training period and data can be added/updated anytime we want.

Some cons of KNN are:

. Calculating distances between data instances is very costly, especially for large datasets, because for each new point we have to loop over the whole training set
. It relies on the assumption that close points have the same (or similar) labels
. Sensitive to outliers and missing data
. Performs badly in high dimension spaces
. Sensitive to the scale of the data and irrelevant features
. Requires memory — need to store all the training data
. Not probabilistic: does not give the probability to have some label for a given input
. Performs badly with imbalanced datasets

The curse of dimensionality:
As mentioned before, KNN relies on the assumption that similar points are close to each other, which means close along every axis. Every new dimension adds another axis along which two points can differ, so it becomes harder and harder for two specific points to be close to each other in every direction. The small simulation below illustrates how distances lose their contrast as the dimension grows.
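This sketch draws random points in the unit cube (NumPy assumed) and shows that the nearest and farthest neighbors of a query point end up at almost the same distance in high dimension, so "nearest" loses its meaning:

```python
import numpy as np

rng = np.random.default_rng(0)

for dim in (2, 10, 100, 1000):
    X = rng.uniform(size=(1000, dim))   # 1000 random points in the unit cube
    q = rng.uniform(size=dim)           # a random query point
    d = np.linalg.norm(X - q, axis=1)
    print(dim, d.min() / d.max())       # the ratio creeps toward 1 as dim grows
```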

5. Decision Boundaries and picking K

Let us now talk about the classification case, so we can define the decision boundaries. A way of understanding KNN is by thinking about it as calculating decision boundaries based on data points and K, which are then used to classify new points. Intuitively, you can think of K as controlling the shape of the decision boundaries.

By fixing K, we define the decision boundaries. For K = 1, the decision boundary has sharp edges and non-smooth curves: a sign of overfitting.

For K = 5, the decision boundary has a smooth curve and we are not overfitting.

So small K means low bias and high variance, and large K means high bias and low variance.

So how do we choose K? Choosing K is unique to every dataset: there is no standard statistical method to compute the single optimal value. Choosing K depends on our knowledge of the domain, and we can use cross-validation to compare accuracies for different values of K (the elbow method), as sketched below.
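A quick sketch of this with scikit-learn (assumed available), using the Iris dataset purely as a stand-in:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Compare cross-validated accuracy across candidate values of K and keep the best one.
for k in range(1, 16, 2):
    model = KNeighborsClassifier(n_neighbors=k)
    scores = cross_val_score(model, X, y, cv=5)
    print(k, scores.mean())
```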

6. Improving KNN

There are various ways to improve the KNN algorithm (two of them are sketched in code after the list):

  • Change the distance metric
  • Dimensionality Reduction
  • Rescaling and normalization
  • Speed up the neighbor search with exact or approximate techniques such as KD-trees, Branch and Bound…
  • Use Locality Sensitive Hashing
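For example, two of these improvements, rescaling the features and using a tree-based neighbor search, can be combined in a few lines with scikit-learn (assumed available; Iris is again just a stand-in dataset):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# Rescale features so that no single feature dominates the distance,
# and let a KD-tree speed up the neighbor search.
model = make_pipeline(
    StandardScaler(),
    KNeighborsClassifier(n_neighbors=5, algorithm="kd_tree"),
)
print(cross_val_score(model, X, y, cv=5).mean())
```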

This is it for the first part of the KNN series. The next lectures will discuss more advanced topics related to this algorithm.
