Support Vector Machine Tuning
Support Vector Machine is an algorithm with many options and parameters to adjust, and tuning its hyperparameters correctly is vital for its reliability and performance.
In this Support Vector Machine tutorial we will cover some of the most important settings for getting an SVM model to run well.
Kernel Selection
The kernel is probably the most fundamental parameter in an SVM model and should be selected carefully. It is also one of the more difficult parameters to tune if you don’t have data science or statistical experience.
Kernel selection boils down to the linearity of your dataset: choose the kernel that works best with the kind of data at hand. The options are linear, poly, rbf and sigmoid, and the default value is rbf.
If the data is linear, choosing linear makes sense; if the data has a polynomial relation, poly is a good option; rbf handles more complex relations in the data; and sigmoid is used to replicate neural-network-like behavior.
You can see more information regarding kernels in the article below:
You might also be interested in choosing a LinearSVC model rather than a regular SVC model with the “linear” kernel option, as this can offer significant benefits and improved results. You can find information on choosing between different SVM models in the tutorial below:
Otherwise, here is a simple Python example showing how to specify the kernel of a Support Vector Machine model in Scikit-Learn:
from sklearn.svm import SVC
SVM = SVC(kernel="poly")
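To see how the kernel choice plays out in practice, here is a minimal sketch comparing all four kernels by cross-validation on a non-linear toy dataset (make_moons and the scores are illustrative assumptions, not part of this tutorial):

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Curved, non-linear toy dataset: rbf should beat the linear kernel here
X, y = make_moons(n_samples=300, noise=0.2, random_state=42)

scores = {}
for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    model = SVC(kernel=kernel)
    scores[kernel] = cross_val_score(model, X, y, cv=5).mean()

for kernel, score in scores.items():
    print(f"{kernel}: {score:.3f}")
```

On a dataset like this the rbf kernel typically scores highest, while on genuinely linear data the linear kernel is usually the better (and faster) choice.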
Penalty Parameter for Regularization
This is the penalty parameter C, which can be used to regularize the model. It only takes positive values and is 1 by default.
If C is increased, mispredictions are penalized more, resulting in a model that fits the training data more tightly. If you increase the C parameter too much, chances are quite high that you will end up with an overfitting SVM model.
from sklearn.svm import SVC
SVM = SVC(C=1.5)
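A quick way to observe this overfitting effect is to compare training and test accuracy for different C values on noisy data. A minimal sketch (the dataset and the specific C values are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# flip_y adds label noise, which makes overfitting at large C visible
X, y = make_classification(n_samples=400, n_features=10, flip_y=0.2,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

train_scores, test_scores = {}, {}
for C in [0.01, 1.0, 100.0]:
    model = SVC(C=C).fit(X_train, y_train)
    train_scores[C] = model.score(X_train, y_train)
    test_scores[C] = model.score(X_test, y_test)
    print(f"C={C}: train={train_scores[C]:.3f} test={test_scores[C]:.3f}")
```

As C grows, training accuracy climbs while test accuracy tends to lag behind it, which is the overfitting pattern described above.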
Polynomial Degree
If you select the polynomial kernel poly, you have the option to specify its degree using the degree parameter (3 by default). Here is a Python code example:
from sklearn.svm import SVC
SVM = SVC(kernel="poly", degree=3)
Gamma Parameter
Gamma controls the influence of each individual training sample (support vector) in an SVM model.
Gamma’s default value used to be auto, which equals a gamma value of 1/n_features, but in current Scikit-Learn versions gamma’s default value is scale, which equals 1 / (n_features * X.var()).
from sklearn.svm import SVC
SVM = SVC(gamma=2)
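The scale default can be verified directly: passing the manually computed value 1 / (n_features * X.var()) should produce the same fitted model as gamma="scale". A short sketch on synthetic data (the dataset is an illustrative assumption):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X[:, 0] > 0).astype(int)

# gamma="scale" resolves to 1 / (n_features * X.var())
manual_gamma = 1.0 / (X.shape[1] * X.var())

auto = SVC(gamma="scale").fit(X, y)
manual = SVC(gamma=manual_gamma).fit(X, y)

print(f"manual gamma: {manual_gamma:.4f}")
print("decision functions match:",
      np.allclose(auto.decision_function(X), manual.decision_function(X)))
```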
Adjusting the Kernel Cache Size
This parameter specifies the kernel cache size in megabytes. By default it is 200 in Scikit-Learn, and it can be adjusted depending on your hardware resources.
from sklearn.svm import SVC
SVM = SVC(cache_size=5000)
Verbose Output
This parameter makes the model print progress reports to the console while training. It is False by default and can be switched on by assigning True to the verbose parameter. Here is a Python code example:
from sklearn.svm import SVC
SVM = SVC(verbose=True)
In May 2016, C.-J. Lin et al. of National Taiwan University published a very useful guide about building Support Vector Machine models. This team is also behind the development of the Libsvm and Liblinear solvers, which most Support Vector Machine implementations use as defaults today.
You can check out the original Guide here:
In their words, “A Practical Guide to Support Vector Classification” is a cookbook for the novice Machine Learning practitioner:
Here we outline a “cookbook” approach which usually gives reasonable results. Note that this guide is not for SVM researchers nor do we guarantee you will achieve the highest accuracy. Our purpose is to give SVM novices a recipe for rapidly obtaining acceptable results. A classification task usually involves separating data into training and testing sets.
SVC:
- Estimator is libsvm by default
- Uses one-vs-one multiclass reduction

LinearSVC:
- Estimator is liblinear by default
- Uses one-vs-rest (one-vs-all) multiclass reduction
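The two estimators can be compared side by side; on linearly separable data they usually reach similar accuracy, with LinearSVC typically training faster on large datasets. A minimal sketch (the dataset is an illustrative assumption):

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=300, n_features=5, random_state=1)

# SVC with a linear kernel: libsvm solver, one-vs-one multiclass scheme
svc = SVC(kernel="linear").fit(X, y)

# LinearSVC: liblinear solver, one-vs-rest multiclass scheme
linear_svc = LinearSVC().fit(X, y)

print(f"SVC:       {svc.score(X, y):.3f}")
print(f"LinearSVC: {linear_svc.score(X, y):.3f}")
```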