Regression trees are a nonparametric regression method that creates a binary tree by recursively splitting the data on the predictor values.

The splits are selected so that the two child nodes have smaller variability around their average value than the parent node. Several options control how deep the tree is grown. The regression prediction for an observation is the mean of all the responses in the terminal node it falls into.
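A minimal sketch of this behavior, using scikit-learn's `DecisionTreeRegressor` (an assumption for illustration; not necessarily the implementation this documentation describes). It shows a depth option limiting tree growth and verifies that a prediction equals the mean response of the training observations sharing the same terminal node.

```python
# Illustrative sketch with scikit-learn (an assumed stand-in implementation).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

# max_depth is one option controlling how deep the tree is grown.
tree = DecisionTreeRegressor(max_depth=3).fit(X, y)

# The prediction for an observation is the mean response of the
# training points that land in the same terminal node.
leaf = tree.apply(X[:1])            # terminal-node id of the first row
same_leaf = tree.apply(X) == leaf
assert np.isclose(tree.predict(X[:1])[0], y[same_leaf].mean())
```

Each split is the one that most reduces the within-node variability, which is why leaf means serve as the predictions.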

The **Predictor** columns
can be either numeric or character (provided there are no more than 31
unique character values in any one character column). There is no need
to transform the response or predictor columns; the same
tree is grown for any monotone transformation of the data.
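The invariance to monotone transformations of a predictor can be checked directly. The sketch below (again assuming scikit-learn as an illustrative implementation) fits one tree on a positive predictor and another on its logarithm; the split thresholds differ, but the observations are partitioned into the same terminal nodes, so the predictions agree.

```python
# Illustrative check that a monotone predictor transformation yields
# the same tree partition (scikit-learn assumed for illustration).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
x = rng.uniform(1, 100, size=300)                 # strictly positive predictor
y = np.where(x > 40, 5.0, 1.0) + rng.normal(scale=0.2, size=300)

t_raw = DecisionTreeRegressor(max_depth=2, random_state=0).fit(x[:, None], y)
t_log = DecisionTreeRegressor(max_depth=2, random_state=0).fit(np.log(x)[:, None], y)

# Same partition of the data, hence identical predictions,
# even though the numeric split points differ.
assert np.allclose(t_raw.predict(x[:, None]), t_log.predict(np.log(x)[:, None]))
```

This works because candidate splits depend only on the ordering of the predictor values, which a monotone transformation preserves.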

See also:

Details on Regression Modeling – General