Boosting techniques can significantly improve the model's prediction accuracy. The AdaBoost algorithm is used as an ensemble method: it fits a sequence of base classifiers, re-assigning the observation weights at each iteration so that larger weights are given to inaccurately classified observations. The outcome is identical to the base model (Decision Tree) above, which I believe is because AdaBoost begins by assigning equal weights to every observation and making simple predictions on the original dataset. If the initial learner predicts an observation incorrectly, it assigns a higher weight to that observation and repeats the process iteratively. I experimented with the learning rate and found that lowering it reduced the accuracy. The XGBoost algorithm performed best, achieving 100% accuracy. By incorporating several regularization strategies, XGBoost reduces overfitting and improves overall performance.
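A minimal sketch of this boosting comparison is shown below, assuming scikit-learn's AdaBoostClassifier and xgboost's XGBClassifier. The lab's actual dataset and train/test split are not reproduced here, so a synthetic classification problem stands in for them, and the printed accuracies are illustrative rather than the lab's results.

```python
# Sketch: compare AdaBoost (at several learning rates) with XGBoost.
# A synthetic dataset is used as a stand-in for the lab's data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# AdaBoost: fits a sequence of weak learners (decision stumps by default),
# re-weighting misclassified observations more heavily at each iteration.
for lr in (0.01, 0.1, 1.0):  # in the lab, lower learning rates gave worse accuracy
    ada = AdaBoostClassifier(n_estimators=100, learning_rate=lr, random_state=42)
    ada.fit(X_train, y_train)
    print(f"AdaBoost (learning_rate={lr}):",
          accuracy_score(y_test, ada.predict(X_test)))

# XGBoost: gradient boosting with built-in L1/L2 regularization, which the
# text above credits for reduced overfitting.
xgb = XGBClassifier(n_estimators=100, learning_rate=0.1,
                    reg_lambda=1.0, reg_alpha=0.0, random_state=42)
xgb.fit(X_train, y_train)
print("XGBoost:", accuracy_score(y_test, xgb.predict(X_test)))
```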
This lab reviewed two common ensemble techniques, bagging and boosting, and allowed learners to compare and contrast them. The technical aspects of the AdaBoost and XGBoost algorithms were explored, each approach was briefly tried, and the results were compared. The outcomes of each technique can be seen below, with XGBoost coming out on top, followed by the bagging technique.