Support Vector Machines - Cross-Validation Tab
Select the Cross-validation tab of the Support Vector Machines dialog to access options for applying the cross-validation algorithm to obtain estimates of the training parameters displayed on the SVM tab. Although you can specify these training parameters directly on the SVM tab, it is often the case that little is known about their best values; cross-validation can provide you with estimates.
The general idea of this method is to divide the overall sample into v folds (randomly drawn, disjoint sub-samples). The same type of SVM analysis is then successively applied to the observations belonging to the v-1 folds (which constitute the cross-validation training sample), and the resulting model is applied to the remaining fold v (the fold that was not used to fit the SVM model, i.e., the testing sample) to compute the error, usually defined as the sum-of-squares error; this error quantifies how well the observations in fold v can be predicted by the SVM model. The results of the v replications are averaged to yield a single measure of model error, i.e., a measure of the stability of the respective model and of its validity for predicting unseen data.
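A minimal sketch of this v-fold procedure is shown below using scikit-learn; the library, the SVR kernel settings, the synthetic data, and the sum-of-squares error measure are illustrative assumptions and do not reflect the dialog's internal implementation.

    # Illustrative sketch of v-fold cross-validation for an SVM model.
    # Library (scikit-learn), parameter values, and data are assumptions.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.model_selection import KFold
    from sklearn.svm import SVR

    X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

    v = 5  # number of folds
    fold_errors = []
    for train_idx, test_idx in KFold(n_splits=v, shuffle=True, random_state=0).split(X):
        model = SVR(C=1.0, epsilon=0.1)            # candidate training parameters
        model.fit(X[train_idx], y[train_idx])      # fit on the v-1 training folds
        residuals = y[test_idx] - model.predict(X[test_idx])
        fold_errors.append(np.sum(residuals ** 2)) # sum-of-squares error on fold v

    cv_error = np.mean(fold_errors)  # averaged over the v replications
    print(f"{v}-fold cross-validation error: {cv_error:.3f}")

The averaged error serves as the single measure of model quality that the search described below tries to minimize.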
Minimum. Specify the minimum value of the training parameters at which the search starts.
Maximum. Specify the maximum value of the training parameters up to which the search is performed.
Increment. Specify the amount by which the value of the training parameters is increased at each step of the search (see the sketch after this list).
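The sketch below shows how a Minimum/Maximum/Increment specification translates into a cross-validated parameter search, here over the capacity parameter C as an assumed example; the library calls, range values, and scoring choice are illustrative, not the dialog's actual procedure.

    # Illustrative sketch: search a training parameter (here C) over the range
    # defined by a minimum, maximum, and increment, keeping the value with the
    # lowest averaged cross-validation error. Names and ranges are assumptions.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

    minimum, maximum, increment = 1.0, 10.0, 1.0
    candidates = np.arange(minimum, maximum + increment, increment)

    best_C, best_error = None, np.inf
    for C in candidates:
        # cross_val_score returns negative MSE; negate to get an error to minimize.
        scores = cross_val_score(SVR(C=C), X, y, cv=5,
                                 scoring="neg_mean_squared_error")
        error = -scores.mean()
        if error < best_error:
            best_C, best_error = C, error

    print(f"Estimated C = {best_C} (cross-validation error {best_error:.3f})")

The parameter value with the lowest averaged cross-validation error is the estimate reported back on the SVM tab.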