2022 journal article

A Double Penalty Model for Ensemble Learning

Mathematics.

By: W. Wang* & Y. Zhou

author keywords: double penalty model; interpretability; partially linear model; separability
TL;DR: By considering ensemble learning for two learning ensemble components as a double penalty model, this work provides a framework to better understand the relative convergence and identifiability of the two components. (via Semantic Scholar)
Source: ORCID
Added: December 1, 2022

Modern statistical learning techniques often include learning ensembles, in which the combination of multiple separate prediction procedures (ensemble components) can improve prediction accuracy. Although ensemble approaches are widely used, work remains to improve our understanding of their theoretical underpinnings, such as the identifiability and relative convergence rates of the ensemble components. By treating ensemble learning with two components as a double penalty model, we provide a framework for better understanding the relative convergence and identifiability of the two components. In addition, under appropriate conditions, the framework provides convergence guarantees for a form of residual stacking that iterates between the two components as a cyclic coordinate ascent procedure. We conduct numerical experiments on three synthetic simulations and two real-world datasets to illustrate the performance of our approach and to support our theory.
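The residual-stacking idea described in the abstract can be sketched as a backfitting loop: each component is refitted to the residuals left by the other, and one component is centered to keep the decomposition identifiable. The sketch below is illustrative only and is not the authors' method: the choice of a degree-1 polynomial fit for the first component, a Gaussian kernel smoother for the second, and all function names and parameter values (`bandwidth`, `n_iter`) are assumptions made for the example.

```python
import numpy as np

def kernel_smooth(x, r, bandwidth=0.1):
    """Nadaraya-Watson smoother with a Gaussian kernel (illustrative choice)."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ r) / w.sum(axis=1)

def fit_residual_stack(x, y, n_iter=20, bandwidth=0.1):
    """Alternate between two components, each fitted to the other's residuals
    (a cyclic coordinate ascent over the two-component decomposition)."""
    f_lin = np.zeros_like(y)     # parametric (linear) component
    f_smooth = np.zeros_like(y)  # nonparametric (smooth) component
    for _ in range(n_iter):
        # Refit the linear component to the residuals of the smoother.
        beta = np.polyfit(x, y - f_smooth, 1)
        f_lin = np.polyval(beta, x)
        # Refit the smoother to the residuals of the linear component.
        f_smooth = kernel_smooth(x, y - f_lin, bandwidth)
        # Center the smooth component so the intercept is identified
        # with the linear part (a common identifiability constraint).
        f_smooth -= f_smooth.mean()
    return f_lin, f_smooth
```

As a quick check, on data generated as a line plus a sinusoid, the combined fit `f_lin + f_smooth` should leave much smaller residuals than the linear component alone, illustrating how the two components separate the signal.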