Gradient Boosting
Ensembles and boosting
An ensemble is a set of models for solving the same problem. The strength of an ensemble is that the averaged error of a group of models is smaller than the errors of the individual models.
Another approach to building an ensemble is boosting, where each subsequent model takes into account the errors of the previous ones, and the final prediction is a weighted sum of the base learners' forecasts. Take a look:

$$a_N(x) = \sum_{k=1}^{N} \gamma_k b_k(x)$$

where $a_N(x)$ is the ensemble prediction, $N$ is the number of base learners, $b_k(x)$ is the prediction of the $k$-th base learner, and $\gamma_k$ is its weight.
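For instance, with $N = 2$ base learners that predict $b_1(x) = 10$ and $b_2(x) = 2$ for some observation, and weights $\gamma_1 = 1$ and $\gamma_2 = 0.5$, the ensemble prediction would be $a_2(x) = 1 \cdot 10 + 0.5 \cdot 2 = 11$ (the numbers here are made up purely for illustration).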
For example, say we are dealing with a regression task. We have $n$ observations with features $x_i$ and correct answers $y_i$. Our task is to minimize the MSE loss function:

$$\frac{1}{n} \sum_{i=1}^{n} \bigl(a_N(x_i) - y_i\bigr)^2 \to \min$$
For convenience, set all the model weights equal to one:

$$\gamma_k = 1, \quad k = 1, \dots, N$$
We get:

$$a_N(x) = \sum_{k=1}^{N} b_k(x)$$
Now let's build the ensemble sequentially, one model at a time.
First, build the base learner $b_1$ by solving the minimization task:

$$b_1 = \arg\min_{b} \frac{1}{n} \sum_{i=1}^{n} \bigl(b(x_i) - y_i\bigr)^2$$
The result is this ensemble:

$$a_1(x) = b_1(x)$$
Denote the residual. It is the difference between the correct answers and the prediction at the first step:

$$e_{1,i} = y_i - b_1(x_i)$$
At the second step, we build the model $b_2$ to predict this residual:

$$b_2 = \arg\min_{b} \frac{1}{n} \sum_{i=1}^{n} \bigl(b(x_i) - e_{1,i}\bigr)^2$$
The ensemble will take the following form:

$$a_2(x) = b_1(x) + b_2(x)$$
At each subsequent step, the algorithm minimizes the ensemble error from the preceding step.
Let's summarize the formulas. At step $k$, the residual is the difference between the correct answers and the prediction of the ensemble from the previous step:

$$e_{k-1,i} = y_i - a_{k-1}(x_i)$$
The ensemble itself is the sum of the predictions of all the base learners built up to this step:

$$a_k(x) = \sum_{j=1}^{k} b_j(x)$$
So, at step $k$, the algorithm picks the base learner $b_k$ that minimizes the error of the ensemble from step $k-1$:

$$b_k = \arg\min_{b} \frac{1}{n} \sum_{i=1}^{n} \bigl(b(x_i) - e_{k-1,i}\bigr)^2$$
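To make the summary concrete, here is a minimal sketch of this residual-fitting loop in Python. It assumes shallow decision trees from scikit-learn as base learners; the names boost_mse and ensemble_predict are illustrative, not part of any library.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost_mse(features, target, n_estimators=3):
    # Sequentially fit base learners to the residual of the current ensemble
    learners = []
    residual = np.asarray(target, dtype=float)  # at the start e_0 = y, since the ensemble is empty
    for _ in range(n_estimators):
        base = DecisionTreeRegressor(max_depth=2)
        base.fit(features, residual)                   # b_k approximates the current residual
        learners.append(base)
        residual = residual - base.predict(features)   # e_k = y - a_k(x)
    return learners

def ensemble_predict(learners, features):
    # a_N(x) is the sum of the base learner predictions (all weights equal to 1)
    return np.sum([b.predict(features) for b in learners], axis=0)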
Gradient boosting
Suppose our loss function is $L(y, a)$ and it has a derivative. Let's recall the ensemble formula:

$$a_k(x) = a_{k-1}(x) + \gamma_k b_k(x)$$
At each step, we select predictions that minimize the loss function:

$$L\bigl(y, a_{k-1}(x) + \gamma_k b_k(x)\bigr) \to \min$$
Minimize this function with gradient descent. To do so, at each step, calculate the negative gradient of the loss function with respect to the prediction $a_{k-1}(x)$:

$$g_k(x) = -\left.\frac{\partial L(y, a)}{\partial a}\right|_{a = a_{k-1}(x)}$$
To push the predictions towards the correct answers, the base learner $b_k$ learns to predict $g_k$:

$$b_k = \arg\min_{b} \frac{1}{n} \sum_{i=1}^{n} \bigl(b(x_i) - g_k(x_i)\bigr)^2$$
Obtain the weight $\gamma_k$ for $b_k$ from the minimization task by iterating over different values:

$$\gamma_k = \arg\min_{\gamma} L\bigl(y, a_{k-1}(x) + \gamma b_k(x)\bigr)$$
This coefficient scales the base learner's contribution and helps adjust the ensemble so that its predictions are as accurate as possible.
Gradient boosting works with any loss function that has a derivative, such as the mean squared error in a regression task or the logarithmic loss in a binary classification task.
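As an illustration of the general procedure, here is a minimal sketch in Python, again assuming scikit-learn decision trees as base learners. The MSE loss is used only as an example of a differentiable loss, and the names gradient_boost, mse_loss, and mse_grad are illustrative rather than part of any library.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Example: MSE loss and its derivative with respect to the prediction
def mse_loss(y, a):
    return np.mean((a - y) ** 2)

def mse_grad(y, a):
    return 2 * (a - y) / len(y)

def gradient_boost(features, target, loss, loss_grad, n_estimators=10):
    prediction = np.zeros(len(target))               # a_0(x) = 0
    learners, weights = [], []
    for _ in range(n_estimators):
        neg_grad = -loss_grad(target, prediction)    # negative gradient at a_{k-1}(x)
        base = DecisionTreeRegressor(max_depth=2)
        base.fit(features, neg_grad)                 # b_k learns to predict the negative gradient
        step = base.predict(features)
        # pick gamma_k by iterating over candidate values
        candidates = np.linspace(0.1, 1.0, 10)
        gamma = min(candidates, key=lambda g: loss(target, prediction + g * step))
        prediction = prediction + gamma * step       # a_k(x) = a_{k-1}(x) + gamma_k * b_k(x)
        learners.append(base)
        weights.append(gamma)
    # to predict for new data: sum(w * b.predict(new_features) for b, w in zip(learners, weights))
    return learners, weights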
Gradient boosting regularization
Regularization can be used to reduce overfitting in gradient boosting. Just as linear regression is regularized by reducing the size of its weights, gradient boosting is regularized by:
- step size reduction
- adjustment of tree parameters
- subsample randomization for the base learners.
Reduce the step size. Revise the formula for calculating predictions at step $k$:

$$a_k(x) = a_{k-1}(x) + \gamma_k b_k(x)$$
Introduce the coefficient $\eta$. It controls the learning rate and can be used to reduce the step size:

$$a_k(x) = a_{k-1}(x) + \eta \, \gamma_k b_k(x)$$
The value for this coefficient is picked by iterating over different values in the range from 0 to 1. A smaller value means a smaller step towards the negative gradient, which typically makes the ensemble less prone to overfitting. But if the learning rate is too low, the training process will take too long.
Another way to regularize gradient boosting is to adjust tree parameters. We can limit the tree depth or number of elements in each node, try different values, and see how it affects the result.
A third method of regularization is working with subsamples: at each step, the algorithm trains the base learner on a random subsample of the data instead of the whole set. This version of the algorithm is similar to stochastic gradient descent (SGD) and is called stochastic gradient boosting.
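For reference, these three regularization techniques appear as hyperparameters in most gradient boosting implementations. Here is a minimal sketch using scikit-learn's GradientBoostingRegressor, chosen only as an example; the variable names features_train and target_train are assumed.

from sklearn.ensemble import GradientBoostingRegressor

model = GradientBoostingRegressor(
    learning_rate=0.1,  # step size reduction: shrinks each base learner's contribution
    max_depth=3,        # tree parameter adjustment: limits the depth of every base learner
    subsample=0.5,      # stochastic gradient boosting: each tree sees a random 50% subsample
    n_estimators=100,   # number of base learners
)
# model.fit(features_train, target_train)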
Libraries for gradient boosting
- XGBoost (extreme gradient boosting) is a popular gradient boosting library on Kaggle. Open source. Released in 2014.
- LightGBM (light gradient boosting machine). Developed by Microsoft. Fast and accurate gradient boosting training. Directly works with categorical features. Released in 2017. Comparison with XGBoost: https://lightgbm.readthedocs.io/en/latest/Experiments.html
- CatBoost (categorical boosting). Developed by Yandex. Competitive with or superior to other gradient boosting algorithms on many benchmark evaluation metrics. Applies various encoding techniques for categorical features (LabelEncoding, One-Hot Encoding). Released in 2017. Comparison with XGBoost and LightGBM: https://catboost.ai/#benchmark
Import CatBoostClassifier from the library and create a model. Since we have a classification problem, specify the logistic loss function. Take 10 iterations so that we don't have to wait too long.
from catboost import CatBoostClassifier

model = CatBoostClassifier(loss_function="Logloss", iterations=10)
Train the model with the fit() method. In addition to the features and target, pass the categorical features to the model:
# cat_features - categorical features
model.fit(features_train, target_train, cat_features=cat_features)
When we have many iterations and don't want to output information for each one, use the verbose argument:
model = CatBoostClassifier(loss_function="Logloss", iterations=50)
model.fit(features_train, target_train, cat_features=cat_features, verbose=10)
Calculate the predictions with the predict() method:
pred_valid = model.predict(features_valid)
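To check the quality of the predictions, we can compare them with the correct answers for the validation set. A short sketch, assuming a variable target_valid holds those answers and using accuracy only as an example metric:

from sklearn.metrics import accuracy_score

# target_valid - correct answers for the validation set (assumed to exist)
print("Accuracy:", accuracy_score(target_valid, pred_valid))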