subsampwinner packages a feature-selection workflow motivated by the instability of finite-sample model fitting. The goal is not just to find a sparse set of predictors, but to understand how those selections behave across repeated perturbations of the observed data.
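To make the idea concrete, here is a minimal sketch of that kind of workflow: repeatedly subsample the rows, run a simple selector on each subsample, and tally how often each feature "wins." The function name, the correlation-based selector, and all parameters are illustrative assumptions, not the package's actual API.

```python
import numpy as np

def selection_frequencies(X, y, k=2, n_subsamples=200, frac=0.5, seed=0):
    """Hypothetical helper: on each row-subsample, select the top-k
    features by absolute correlation with y, then report the fraction
    of subsamples in which each feature was selected."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    m = max(2, int(frac * n))
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=m, replace=False)
        Xs, ys = X[idx], y[idx]
        # absolute Pearson correlation of each column with y
        Xc = Xs - Xs.mean(axis=0)
        yc = ys - ys.mean()
        scores = np.abs(Xc.T @ yc) / (
            np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
        )
        counts[np.argsort(scores)[-k:]] += 1
    return counts / n_subsamples

# toy data: features 0 and 1 drive y, features 2-4 are pure noise
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.5, size=300)
freq = selection_frequencies(X, y)
```

Features that are stably selected show frequencies near 1 across subsamples, while unstable or spurious selections hover near the chance level; examining that full frequency profile, rather than a single fitted model, is the point of the exercise.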
The project sits at the intersection of statistical learning, reproducibility, and practical tooling. It reflects a broader theme in my work: methods are only useful if their behavior is legible under perturbation, noise, and imperfect data pipelines.