
Who Invented the Support Vector Machine?

Isabelle Guyon worked with Vladimir Naumovich Vapnik on the invention of Support Vector Machines, and she is known for her great contributions to SVMs and neural networks.

Support Vector Machine Original Paper

1- SVM History

The Support Vector Machine algorithm was invented by Vladimir N. Vapnik, Bernhard E. Boser, and Isabelle M. Guyon while they were working at AT&T Bell Laboratories in 1992.

The abstract of their paper summarizes the contribution: "A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms."

The original Support Vector Machine paper is A Training Algorithm for Optimal Margin Classifiers, presented at the fifth annual workshop on Computational Learning Theory (COLT) in 1992.
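
To make the abstract's ideas concrete, here is a minimal sketch using scikit-learn (a modern library, not the authors' 1992 implementation); a very large C value approximates the hard-margin classifier, and the fitted model exposes the supporting patterns directly:

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable point clouds.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# A very large C approximates the hard-margin formulation: the margin
# between the training patterns and the decision boundary is maximized.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

# The solution is a linear combination of "supporting patterns" -- the
# subset of training points closest to the decision boundary.
print("support vectors per class:", clf.n_support_)
print("margin width:", 2.0 / np.linalg.norm(clf.coef_[0]))
```

Swapping kernel="linear" for "poly" or "rbf" gives the polynomial and Radial Basis Function variants mentioned in the abstract.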

In 1993, Guyon and Boser co-authored another breakthrough paper with Vapnik, stemming from their work together at AT&T Bell Labs. This research paper is titled Automatic Capacity Tuning of Very Large VC-Dimension Classifiers. Boser had moved from AT&T Bell Labs to the University of California, Berkeley by the time the paper was published.

2- Support Vector Clustering

An extension to the SVM was introduced in 2001 by Asa Ben-Hur, David Horn, Hava T. Siegelmann, and Vladimir Vapnik. In their paper, Support Vector Clustering, the authors published an unsupervised, clustering version of the algorithm: data points are mapped to a high-dimensional feature space by a kernel, the smallest sphere enclosing the mapped points is computed, and that sphere's contours, mapped back to data space, trace out the cluster boundaries. A sketch of the idea follows.
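
scikit-learn ships no support vector clustering estimator, but the paper's two steps can be sketched with off-the-shelf parts: a one-class SVM with an RBF kernel stands in for the minimal enclosing sphere (the two formulations coincide for kernels with constant k(x, x), which the RBF kernel has), and two points share a cluster when the line segment between them stays inside the learned boundary. The function name and default parameters below are illustrative assumptions, not the paper's:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.svm import OneClassSVM


def support_vector_clustering(X, gamma=1.0, nu=0.1, n_interp=10):
    # Step 1: estimate the enclosing boundary in feature space.
    # With an RBF kernel, the one-class SVM plays the role of the
    # minimal enclosing sphere computed in the paper.
    boundary = OneClassSVM(kernel="rbf", gamma=gamma, nu=nu).fit(X)

    def inside(points):
        # decision_function >= 0 means a point lies inside the boundary.
        return boundary.decision_function(points) >= 0

    # Step 2: connect two points if the straight segment between them
    # never leaves the boundary; clusters are the connected components.
    n = len(X)
    ts = np.linspace(0.0, 1.0, n_interp)[:, None]
    adjacency = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            segment = X[i] + ts * (X[j] - X[i])
            if inside(segment).all():
                adjacency[i, j] = adjacency[j, i] = True
    _, labels = connected_components(csr_matrix(adjacency), directed=False)
    return labels


# Toy usage: points within the same well-separated blob should
# largely come back with the same cluster label.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3.0, 0.4, (30, 2)), rng.normal(3.0, 0.4, (30, 2))])
print(support_vector_clustering(X, gamma=1.0, nu=0.1))
```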

3- Hierarchical Support Vector Machines

In 2004, Yangchi Chen and Melba M. Crawford introduced Hierarchical SVM in their paper: Integrating Support Vector Machines in a Hierarchical Output Space Decomposition Framework. HSVM is a hybrid approach in which a decision tree uses a support vector machine at each of its nodes.

By broadening the application areas of support vector machines, the HSVM paper advanced SVM-based pattern recognition in applications such as fraud detection, market prediction, and computer vision. A simplified sketch of the tree-of-SVMs idea follows.
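
The paper targets hyperspectral classification and derives its own class-grouping criterion; the sketch below is a simplified stand-in that groups classes with k-means on their centroids and trains a scikit-learn SVC at every internal node. The class name and the grouping rule are assumptions made for illustration, not the paper's exact method:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC


class HierarchicalSVM:
    """A binary tree in which every internal node is a two-way SVM."""

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.tree_ = self._build(X, y, sorted(set(y)))
        return self

    def _build(self, X, y, classes):
        if len(classes) == 1:
            return classes[0]  # leaf: a single class remains
        # Split the classes into two super-classes by clustering their
        # centroids (a simplification of the paper's criterion).
        centroids = np.array([X[y == c].mean(axis=0) for c in classes])
        side = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(centroids)
        left = [c for c, s in zip(classes, side) if s == 0]
        right = [c for c, s in zip(classes, side) if s == 1]
        # Train a binary SVM that routes samples left or right.
        target = np.isin(y, right).astype(int)
        clf = SVC(kernel="rbf").fit(X, target)
        mask = target == 1
        return (clf,
                self._build(X[~mask], y[~mask], left),
                self._build(X[mask], y[mask], right))

    def predict(self, X):
        return np.array([self._route(self.tree_, x) for x in np.asarray(X)])

    def _route(self, node, x):
        # Walk the tree until a leaf (a single class label) is reached.
        while isinstance(node, tuple):
            clf, left, right = node
            node = right if clf.predict(x[None])[0] == 1 else left
        return node
```

Usage mirrors any scikit-learn classifier: HierarchicalSVM().fit(X_train, y_train).predict(X_test). A balanced tree needs only roughly log2(k) SVM evaluations per prediction instead of the k or k(k-1)/2 of one-vs-rest or one-vs-one schemes, which is what makes the hierarchical decomposition attractive when there are many classes.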

Summary

In this article we covered the history of the Support Vector Machine algorithm. SVM models are very versatile and can be used for many different types of tasks. They can do classification, regression, and even clustering; they work with linear as well as non-linear data; and they can be used with a number of different kernels, as the short example below shows.
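
A quick illustration of that versatility with scikit-learn (synthetic data and default hyperparameters, just to show the common interface):

```python
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))

# Classification: switching the kernel is all it takes to go from a
# linear to a non-linear decision boundary.
y_cls = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)  # non-linear target
print("linear kernel:", SVC(kernel="linear").fit(X, y_cls).score(X, y_cls))
print("RBF kernel:   ", SVC(kernel="rbf").fit(X, y_cls).score(X, y_cls))

# Regression with the same margin machinery (epsilon-insensitive loss).
y_reg = np.sin(X[:, 0]) + 0.1 * rng.normal(size=len(X))
print("SVR R^2:", SVR(kernel="rbf").fit(X, y_reg).score(X, y_reg))
```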

We have also covered a number of extensions of Support Vector Machines and their originators:

  • Vladimir Vapnik et al. – 1992: A training algorithm for optimal margin classifiers
  • Asa Ben-Hur et al. – 2001: Support Vector Clustering
  • Yangchi Chen et al. – 2004: Hierarchical SVM

If this customizable side of support vector machines appeals to you, then there may well be a place for SVM skills in your machine learning toolbox. A natural next step is SVM hyperparameter optimization, which is often crucial for getting good results.