Classification Trees Button
Click the button to display the Classification Trees Startup Panel. The Classification Trees module is used to predict membership of cases or objects (i.e., classify cases) in the classes of a categorical dependent variable from their measurements on one or more predictor variables.
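The core idea of classifying cases by splitting on predictor values can be illustrated with a short, self-contained sketch. This is not STATISTICA code; it is a minimal Python illustration, assuming a single ordered predictor, of choosing the univariate split point that minimizes the weighted Gini impurity of the two resulting groups:

```python
# Minimal illustrative sketch (not STATISTICA code): find the best
# univariate split on one ordered predictor by minimizing the weighted
# Gini impurity of the resulting left/right groups.
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels: 1 - sum(p_k^2)."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_univariate_split(x, y):
    """Return (threshold, weighted_impurity) for the best split x <= threshold."""
    n = len(x)
    pairs = sorted(zip(x, y))
    best = (None, float("inf"))
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # cannot split between equal predictor values
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [lab for v, lab in pairs[:i]]
        right = [lab for v, lab in pairs[i:]]
        imp = (len(left) * gini(left) + len(right) * gini(right)) / n
        if imp < best[1]:
            best = (thr, imp)
    return best

# Example: a predictor that cleanly separates class "a" from class "b"
x = [1, 2, 3, 4, 5, 6]
y = ["a", "a", "a", "b", "b", "b"]
thr, imp = best_univariate_split(x, y)
print(thr, imp)  # -> 3.5 0.0
```

A full tree-building algorithm applies this search recursively to each node, over all predictors, until a stopping rule or pruning criterion halts the process.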
The Classification Trees module provides a comprehensive implementation of the most recently developed algorithms for efficiently producing classification trees and testing their robustness. Trees can be produced using categorical predictor variables, ordered predictor variables, or both, and using univariate splits or linear combination splits. STATISTICA includes options for exhaustive splits (as in THAID and C & RT) or discriminant-based splits; unbiased variable selection (as in QUEST); direct stopping rules (as in FACT) or bottom-up pruning (as in C & RT); pruning based on misclassification rates or on the deviance function; and generalized Chi-square, G-square, or Gini-index goodness-of-fit measures. Priors and misclassification costs can be specified as equal, estimated from the data, or user defined. You can also specify the v value for v-fold cross-validation during tree building and for error estimation, the size of the SE rule, the minimum node size before pruning, seeds for random number generation, and the Alpha value for variable selection. Integrated graphics options are provided for exploring the input and output data.
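The v-fold cross-validation mentioned above can be sketched in a few lines. The following is an illustrative Python example, not STATISTICA code: the data are divided into v folds, a classifier is fit on v-1 folds, its misclassification rate is measured on the held-out fold, and the v rates are averaged. A deliberately trivial majority-class rule stands in for the tree so the sketch stays self-contained:

```python
# Illustrative sketch (not STATISTICA code) of v-fold cross-validation
# for error estimation. A trivial majority-class rule stands in for a
# classification tree; the CV mechanics are the same either way.
def fit_majority(labels):
    """A deliberately simple 'classifier': predict the modal class."""
    return max(set(labels), key=labels.count)

def v_fold_error(y, v=3):
    """Average held-out misclassification rate over v folds."""
    folds = [y[i::v] for i in range(v)]  # simple interleaved fold assignment
    rates = []
    for i in range(v):
        train = [lab for j, f in enumerate(folds) if j != i for lab in f]
        test = folds[i]
        pred = fit_majority(train)
        rates.append(sum(lab != pred for lab in test) / len(test))
    return sum(rates) / v

y = ["a", "a", "a", "a", "a", "b"]
print(v_fold_error(y, v=3))  # one fold contains the lone "b" -> 1/6 overall
```

Because each case is held out exactly once, the averaged rate is a less optimistic estimate of the misclassification rate than the error measured on the training data itself.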
Advanced methods for tree classifications, including flexible options for model building and interactive tools to explore the trees, are also available in the General Classification/Regression Tree Models (GTrees) and General CHAID (Chi-square Automatic Interaction Detection) Models facilities.