Nearest Neighbor Decision Rules with Proximity Graphs

I found an interesting introductory course on this theme (via), which summarizes different techniques for class-membership decision rules. These can be divided into parametric and non-parametric approaches, depending on whether the assignment is computed from probability estimates or from a Euclidean distance criterion.

Interestingly enough, besides being extremely simple to implement, these non-parametric variants have a probability of error that comes close to the Bayes (optimal) probability of error. In the document cited above, Sergei Savchenko compiles these methods, ordering them by their efficiency and retrieval accuracy.

The first one presented is the k-nearest neighbor decision rule, which assigns an unknown object to the class most heavily represented among its k nearest neighbors.

[Figure 2 from Sergei Savchenko's project page (Godfried's Teaching Projects, Pr.98)]
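As a minimal sketch of the rule, here is a plain NumPy implementation; the helper name `knn_classify`, the toy points, the labels, and the choice k = 3 are mine for illustration, not from the source.

    # k-NN decision rule: classify a query point by majority vote
    # among its k nearest training points (Euclidean distance).
    import numpy as np
    from collections import Counter

    def knn_classify(query, points, labels, k=3):
        distances = np.linalg.norm(points - query, axis=1)  # distance to every training point
        nearest = np.argsort(distances)[:k]                 # indices of the k closest points
        return Counter(labels[i] for i in nearest).most_common(1)[0][0]

    # Toy training set: two small clusters in the plane.
    points = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                       [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
    labels = ["a", "a", "a", "b", "b", "b"]
    print(knn_classify(np.array([0.15, 0.15]), points, labels, k=3))  # -> "a"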

The Voronoi diagram is a partition of space into regions, within which all points are closer to some particular node than to any other node.

[Figure 3 from Sergei Savchenko's project page (Godfried's Teaching Projects, Pr.98)]
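As a rough sketch of how such a partition can be computed in practice (assuming SciPy is available; the point set is the same illustrative one as above):

    import numpy as np
    from scipy.spatial import Voronoi

    points = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                       [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
    vor = Voronoi(points)

    # Each input point owns one region of the partition; ridge_points lists
    # the pairs of input points whose regions share a boundary.
    print(vor.point_region)   # region index assigned to each input point
    print(vor.ridge_points)   # pairs of points with adjacent regions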

The Delaunay triangulation is the dual of the Voronoi diagram: if two Voronoi regions share a boundary, the nodes of those regions are connected with an edge. Such nodes are called Delaunay neighbors.

[Figure 4 from Sergei Savchenko's project page (Godfried's Teaching Projects, Pr.98)]
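A small sketch with SciPy's Delaunay class (toy points again; extracting the triangle edges below is one straightforward way to list the Delaunay neighbors):

    import numpy as np
    from scipy.spatial import Delaunay

    points = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                       [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
    tri = Delaunay(points)

    # Every pair of vertices that appears together in a triangle is a pair
    # of Delaunay neighbors, i.e. their Voronoi regions share a boundary.
    edges = set()
    for a, b, c in tri.simplices:
        edges.update({tuple(sorted((a, b))), tuple(sorted((b, c))), tuple(sorted((a, c)))})
    print(sorted(edges))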

The Gabriel graph of a set of points is a subgraph of the Delaunay triangulation of that set: an edge is kept only if the circle having it as diameter contains no other point of the set. Gabriel editing reduces the size of the training set even more than Voronoi editing.

[Figure 10 from Sergei Savchenko's project page (Godfried's Teaching Projects, Pr.98)]
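As a sketch of Gabriel-graph construction under that definition, one can filter the Delaunay edges; SciPy and NumPy are assumed, the helper name `gabriel_edges` and the random points are mine for illustration.

    import numpy as np
    from scipy.spatial import Delaunay

    def gabriel_edges(points):
        tri = Delaunay(points)
        candidates = set()
        for a, b, c in tri.simplices:  # Gabriel edges are a subset of the Delaunay edges
            candidates.update({tuple(sorted((a, b))), tuple(sorted((b, c))), tuple(sorted((a, c)))})
        kept = []
        for p, q in candidates:
            center = (points[p] + points[q]) / 2.0
            radius = np.linalg.norm(points[p] - points[q]) / 2.0
            others = np.delete(np.arange(len(points)), [p, q])
            # Keep the edge only if its diametral circle contains no other point.
            if np.all(np.linalg.norm(points[others] - center, axis=1) >= radius):
                kept.append((p, q))
        return kept

    points = np.random.rand(10, 2)
    print(gabriel_edges(points))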
