By Kardi Teknomo, PhD.


K Nearest Neighbor Algorithm for Classification

Let us start with the K-nearest neighbor algorithm for classification. K-nearest neighbor is a supervised learning algorithm in which a new query instance is classified according to the majority category among its K nearest neighbors. The purpose of the algorithm is to classify a new object based on its attributes and the training samples. The classifier fits no model; it is purely memory-based. Given a query point, we find the K objects (training points) closest to it, and the classification is decided by majority vote among the classes of those K objects, with any ties broken at random. In other words, the K-nearest neighbor algorithm uses the neighborhood's classification as the predicted value for the new query instance.
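The tutorial itself gives no code, but as a rough sketch, the idea above can be written in a few lines of Python. The function name knn_classify, the use of Euclidean distance, and the data representation are assumptions made here for illustration, not part of the original tutorial.

import math
import random
from collections import Counter

def knn_classify(query, training_points, labels, k):
    """Classify `query` by majority vote among its k nearest training points.

    Euclidean distance is an assumed choice for this sketch.
    Ties are broken at random, as described above.
    """
    # Distance from the query point to every training point
    distances = [
        (math.dist(query, point), label)
        for point, label in zip(training_points, labels)
    ]
    # Keep the k closest training points
    nearest = sorted(distances, key=lambda d: d[0])[:k]
    # Count how often each class appears among the k neighbors
    votes = Counter(label for _, label in nearest)
    top_count = max(votes.values())
    winners = [label for label, count in votes.items() if count == top_count]
    # Break any tie at random
    return random.choice(winners)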

For example

We have data from a questionnaire survey (asking people's opinions) and from objective testing, with two attributes (acid durability and strength), to classify whether a special paper tissue is good or not. Here are the four training samples:

X1 = Acid Durability (seconds)   X2 = Strength (kg/square meter)   Classification
7                                7                                 Bad
7                                4                                 Bad
3                                4                                 Good
1                                4                                 Good

Now the factory produces a new paper tissue that passes the laboratory test with X1 = 3 and X2 = 7. Without another expensive survey, can we guess the classification of this new tissue? Fortunately, the K-nearest neighbor (KNN) algorithm can help us predict it, as sketched below.
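As a quick numerical sketch (not part of the original text), the snippet below applies the majority-vote rule to the four training samples for the query point X1 = 3, X2 = 7. The choice of K = 3 and of Euclidean distance are assumptions made purely for illustration.

import math

# Training samples: (acid durability, strength) -> classification
training = [((7, 7), "Bad"), ((7, 4), "Bad"), ((3, 4), "Good"), ((1, 4), "Good")]
query = (3, 7)   # the new paper tissue
k = 3            # K = 3 is an assumed choice for this sketch

# Distance of each training sample to the query point, closest first
scored = sorted((math.dist(point, query), label) for point, label in training)
# scored == [(3.0, 'Good'), (3.6055..., 'Good'), (4.0, 'Bad'), (5.0, 'Bad')]

# Majority vote among the k nearest neighbors
neighbours = [label for _, label in scored[:k]]
prediction = max(set(neighbours), key=neighbours.count)
print(prediction)   # Good

Under those assumptions, the three nearest neighbors are (3, 4), (1, 4) and (7, 7), so the vote is two "Good" against one "Bad" and the new tissue would be classified as Good.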


This tutorial is copyrighted.

The preferred reference for this tutorial is

Teknomo, Kardi. K-Nearest Neighbors Tutorial. https://people.revoledu.com/kardi/tutorial/KNN/